CAN YOU TRUST AI?

AI is becoming increasingly powerful and capable. It is now being used to perform tasks that were once entrusted exclusively to people, such as driving cars or diagnosing breast cancer. But can we trust AI with critical decision-making?

To answer this question, it is important to understand what AI can and can’t do. In general, AI is a system that learns to make decisions from previous examples rather than having rules explicitly programmed in. For example, if you want an AI to recognise cats, you give it a large set of labelled pictures of cats and it gradually learns from them.
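
To make that concrete, here is a minimal sketch of this kind of training in Python. It is purely illustrative: real cat recognisers are deep networks trained on raw pixels, whereas this stand-in uses synthetic feature vectors and scikit-learn so that it runs anywhere.

```python
# A minimal sketch of supervised learning, the kind of training the
# paragraph describes. All names and data here are illustrative stand-ins,
# not a real image pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each image has already been reduced to 10 numeric features.
# Label 1 = "cat", label 0 = "not a cat".
cats = rng.normal(loc=1.0, scale=1.0, size=(500, 10))
others = rng.normal(loc=-1.0, scale=1.0, size=(500, 10))
X = np.vstack([cats, others])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model gradually learns a decision boundary from the examples;
# no rule about what a cat looks like is ever written by hand.
model = LogisticRegression().fit(X_train, y_train)
print(f"accuracy on held-out examples: {model.score(X_test, y_test):.2f}")
```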

But AI doesn’t have human-level intelligence. It learns in a completely different way from people, without the benefit of broad context or background knowledge, so its results depend heavily on having a broad range of training data. There is one famous (possibly apocryphal) example in which an AI was trained to recognise types of tanks. It learned quickly and achieved high accuracy on the training and validation data sets it was given, but once it was tried in real-world situations it failed completely. It turned out that all the pictures of one type of tank had been taken on a sunny day and the others on an overcast day, so the AI had simply learned the difference in lighting levels.
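
The toy example below reconstructs that failure mode under assumed conditions rather than the actual tank data: the training images are random noise whose only usable signal is overall brightness, perfectly confounded with the label. The classifier aces a validation set drawn in the same biased way, then collapses once the lighting correlation is broken.

```python
# A toy version of the tank story: in the training data, the class label
# is perfectly confounded with overall image brightness (sunny vs. overcast).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_images(n, brightness):
    # 8x8 "images" flattened to vectors; the content is random noise,
    # so brightness is the only learnable signal.
    return rng.normal(loc=brightness, scale=0.1, size=(n, 64))

# Biased training set: tank type A always photographed in bright light,
# type B always in dim light.
X_train = np.vstack([make_images(200, 0.8), make_images(200, 0.2)])
y_train = np.array([1] * 200 + [0] * 200)
model = LogisticRegression().fit(X_train, y_train)

# A validation set drawn with the same bias looks perfect...
X_val = np.vstack([make_images(50, 0.8), make_images(50, 0.2)])
y_val = np.array([1] * 50 + [0] * 50)
print(f"biased validation accuracy: {model.score(X_val, y_val):.2f}")  # ~1.00

# ...but with the lighting flipped, it fails completely: the model
# learned lighting, not tanks.
X_real = np.vstack([make_images(50, 0.2), make_images(50, 0.8)])
y_real = np.array([1] * 50 + [0] * 50)
print(f"real-world accuracy: {model.score(X_real, y_real):.2f}")  # ~0.00
```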

Many AI systems have a very narrow focus. Face recognition, for example, is often offered in commercial software because fairly mature and well-tested algorithms are available. The technology works best in tightly controlled situations, such as passport control at an airport, where cameras are at face level rather than mounted high like surveillance cameras with no clear view of the face, and where the system has had the opportunity to properly learn the features of each face. But if two strangers are fighting outside, face recognition won’t return a name or detect that they are fighting. Again, AI is useful in a narrow context rather than a broad one.

Similarly, some video surveillance solutions purport to automatically detect behaviours such as carrying a weapon or fighting. These are typically deep learning systems trained on labelled video of various activities. If the system can match the current scene to one it has learned, it may identify a fight; but if the fight looks different from anything in its training data, it may detect nothing at all. It is largely limited to the data it was trained on.

iCetana takes a different approach: it learns the difference between normal and abnormal movement, which is a purely mathematical operation, and then relies on human judgement to determine whether the abnormal movement matters. At the current level of AI technology, no practical solution can provide the broad context and life experience that a human operator can. iCetana therefore focuses on filtering out irrelevant camera feeds so that the operator is not overwhelmed. This is an appropriate and safe use of AI.
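
As a rough illustration of the general idea, and not iCetana’s actual algorithm, the sketch below learns an envelope of “normal” from hypothetical per-frame motion features and flags anything outside it for a human operator.

```python
# A generic normal-vs-abnormal sketch: learn what usual movement looks
# like, flag what deviates. The features and thresholds are invented
# for illustration; this is not iCetana's algorithm.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Hypothetical features per video frame: (motion magnitude, motion spread).
normal_frames = rng.normal(loc=[1.0, 0.5], scale=0.2, size=(1000, 2))

# Learn the envelope of normal movement; no notion of "fight" or
# "weapon" is ever defined, only "unlike what we usually see".
detector = IsolationForest(random_state=0).fit(normal_frames)

new_frames = np.array([
    [1.1, 0.4],   # ordinary movement
    [5.0, 3.0],   # sudden large, erratic motion
])
for frame, flag in zip(new_frames, detector.predict(new_frames)):
    status = "ABNORMAL - show to operator" if flag == -1 else "normal - filter out"
    print(frame, status)
```

Note that the final call still belongs to the operator: the system only decides what is worth looking at, not what it means.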

There have been cases where autonomous vehicles have crashed, killing their occupants. In particular, there have been two cases where Tesla’s Autopilot was engaged and the vehicle crashed because the driver relied on the system too heavily. Autopilot is not meant to be a fully autonomous system and still needs oversight, but because it handles freeway driving so well, drivers can become complacent, disengage and put too much trust in it.

To answer the original question: AI is not ready to be trusted without human oversight. The “thought” processes of AI are very unlike the way a person assesses a situation, drawing on broad context and wide experience, and this fundamental difference is easy to overlook. It is best to pair AI with someone who can verify the final action rather than letting it act autonomously. This is similar to an aircraft auto-landing system: it can help the pilot, but the pilot must remain alert and ready to step in whenever anything unusual happens.

We believe this is the right approach to AI: it should enhance human judgement, not replace it. We don’t want a situation where a cancer specialist is ignored and a robot automatically irradiates a patient based on its own judgement of a tumour, or where an AI ignores someone being stabbed because the attack doesn’t fit its profile of a fight, when a security operator would recognise it and act immediately. At iCetana, we focus AI on what computers are good at (quickly processing large amounts of data) so that people can do what they are best at: exercising judgement and interpreting a situation.

Source: https://blog.icetana.com/canyoutrustai
