The Poisoned Chalice: Pitfalls of Artificial Intelligence 

   Donghoon Kim | Aug 26, 2019 | 10 min read 

Artificial intelligence is a fast-growing, rapidly evolving field with a wide array of applications, including natural language processing, generative AI, image recognition, and computer vision. The immense potential of AI is drawing global investment and driving widespread adoption of AI tools. Amid this excitement, however, concerns about AI's limitations and accuracy cast a shadow of uncertainty. 

Meow, I’m an adorable cat(?) 

Keras.js is a JavaScript library that runs deep learning models created with Keras, a popular deep learning framework, directly in a web browser. One of the models it ships with is Inception v3, designed primarily for image classification and object recognition. The model comes pre-trained on 1,000 predefined classes of animals and objects and outputs a probability for how closely an image matches each class. Users can also test the model directly by uploading their own images. Notably, it achieves remarkable accuracy on the animals and objects it was trained on: it can not only identify cats but also distinguish specific cat breeds. 
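The same experiment is easy to reproduce with the Python Keras API that the browser demo builds on. The following is a minimal sketch, assuming TensorFlow 2.x and a hypothetical local file cheetah.jpg; it loads the pre-trained Inception v3 model and prints its top predictions for the image.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)

# Load Inception v3 with ImageNet weights (1,000 output classes)
model = InceptionV3(weights="imagenet")

# Inception v3 expects 299x299 RGB input
img = tf.keras.preprocessing.image.load_img("cheetah.jpg", target_size=(299, 299))
x = preprocess_input(np.expand_dims(
    tf.keras.preprocessing.image.img_to_array(img), axis=0))

# Print the top-3 predicted classes with their probabilities
for _, label, prob in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {prob:.2%}")
```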

An image of a cheetah correctly recognized as a cheetah

For example, in this case, an image of a cheetah is correctly recognized as a cheetah with rather high confidence. After slight modifications, the image still looks like a cheetah to the human eye, but the classification model identifies it as a cat, specifically an adorable Persian cat. 

An image of a cheetah recognized as a Persian cat

Artificial intelligence makes it possible to automate judgments that mimic human decision-making. With malicious intent, however, an attacker can manipulate an AI model into producing false results in whatever direction they choose, even in cases where the human eye would judge correctly. 
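The article does not say how the Persian-cat image was produced, but the best-known recipe for such "adversarial examples" is the Fast Gradient Sign Method (FGSM) of Goodfellow et al.: nudge every pixel a tiny step in the direction that increases the model's loss for the true class. A minimal sketch, reusing the model and the preprocessed input x from the snippet above:

```python
def fgsm_perturb(model, image, true_class, epsilon=0.01):
    """Return a copy of `image` perturbed to raise the loss (FGSM)."""
    image = tf.convert_to_tensor(image)
    label = tf.one_hot([true_class], depth=1000)  # one-hot ImageNet label
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.keras.losses.categorical_crossentropy(label, model(image))
    # Move every pixel one small step in the loss-increasing direction
    signed_grad = tf.sign(tape.gradient(loss, image))
    # Stay inside Inception's preprocessed pixel range of [-1, 1]
    return tf.clip_by_value(image + epsilon * signed_grad, -1.0, 1.0)

adversarial = fgsm_perturb(model, x, true_class=293)  # 293 = "cheetah"
print(decode_predictions(model.predict(adversarial), top=3)[0])
```

With a small enough epsilon the perturbation is invisible to a person, yet the top prediction can flip from cheetah (ImageNet class 293) to a house cat such as the Persian cat (class 283).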

The Current State of Artificial Intelligence and Its Limitations 

Among machine learning techniques, deep learning algorithms have brought revolutionary advances, particularly in computer vision. Google's deep learning-based facial recognition model boasts an impressive accuracy rate of 99.96%; considering that average human facial recognition accuracy is around 97%, this is superior performance. In areas once thought beyond AI's reach, such as the game of Go, deep learning's victory over human players had a profound impact on the world. In medicine, AI has shown its potential by accurately detecting small anomalies that doctors might overlook in practice. 

Source: ChosunBiz

While artificial intelligence offers versatile applications across many fields, it also has clear limitations. As the misclassified cheetah shows, an image classifier perceives images through mechanisms vastly different from human vision. Because AI "mimics" human recognition without replicating how it works, it often makes errors that seem unreasonable from a human perspective. Research into this problem is ongoing, but no definitive solution has been found. 

One significant drawback, often mentioned alongside the major advances of machine learning, is the inability to answer the question of "how" a result was obtained. Deep learning models are essentially black boxes: their operation can be described mathematically, yet it remains largely incomprehensible in human terms. For instance, we might not know which aspects of an image a model uses to decide whether it shows a cheetah, which makes it hard to discern why the model recognized the cheetah as a cat or how to prevent it. This lack of transparency makes deep learning harder to work with, as engineers struggle to decide how to adjust a model's numerous hyperparameters, identify its shortcomings, and improve its performance. 
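There are partial probes into the box. One of the simplest is a saliency map: the gradient of the winning class's score with respect to the input pixels, which at least shows where the model is "looking". A rough sketch, again assuming the TensorFlow model and input x from the earlier snippets:

```python
def saliency_map(model, image):
    """Per-pixel sensitivity of the winning class score to the input."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        top_score = tf.reduce_max(model(image)[0])  # winning class score
    grads = tape.gradient(top_score, image)
    # Collapse RGB channels into a single importance value per pixel
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]

heatmap = saliency_map(model, x)  # shape (299, 299); plot as a heatmap
```

Such maps show where the model attends, not why it decides, which is one reason explainable AI remains an open research problem.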

Furthermore, artificial intelligence relies heavily on data-driven inference. Unlike humans, who can learn quickly from a few examples, deep learning models require vast amounts of data to train properly; models such as AlexNet, ResNet, and GoogLeNet were trained on the ImageNet database of over 14 million images. Humans show a remarkable talent for learning abstract concepts, and this ability may not result solely from accumulated experience: research indicates that even newborns and infants under one year old can learn abstract concepts from a handful of examples. Deep learning lacks this ability. To solve a problem with deep learning, the training data must therefore include a diverse range of examples covering the possible scenarios, and when outcomes depend on many unpredictable factors, as in predicting stock market fluctuations, achieving high performance becomes difficult. Google Flu Trends, created to predict influenza outbreaks, initially performed well but failed to predict the emergence of a novel flu strain in 2013, and its subsequent predictions proved inaccurate. 
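In practice, engineers partly compensate for this data hunger by synthetically widening the training set. A small illustration with Keras's ImageDataGenerator (the variables images and labels here are hypothetical NumPy arrays): random rotations, flips, and zooms multiply the scenarios a model sees without collecting new data.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generate randomly transformed variants of each training image
datagen = ImageDataGenerator(
    rotation_range=20,     # rotate up to +/- 20 degrees
    horizontal_flip=True,  # mirror left-right at random
    zoom_range=0.2,        # zoom in or out by up to 20%
)

# Yields an endless stream of augmented batches for model.fit()
batches = datagen.flow(images, labels, batch_size=32)
```

Augmentation helps, but it only stretches the scenarios already present in the data; it cannot conjure genuinely novel cases, like a new flu strain.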

As global interest in AI rises, discussions have extended beyond technical aspects to social and ethical considerations. In the United States and some European countries, AI has been integrated into critical decision-making processes such as sentencing in criminal cases, on the premise that if all the data needed for a legal decision is fed into an AI system, the verdict will be fair and free of human bias. But when incorrect data or misinterpreted information leads to an innocent person being wrongly punished, assigning accountability is a complex matter that cannot be pinned on the AI alone. 

Source: World Governments Summit

Can Artificial Intelligence Advance Further? 

To address the debate and skepticism over how AI results are derived, efforts are being made to introduce the concept of the "white box" into deep learning, making its reasoning understandable to humans. The white-box approach augments an existing black-box machine learning system by exposing its source code and the weights assigned to its various factors, so that its inferences can be inspected and optimized. Another approach under trial has humans intervene during the model-building phase to guide the model in the right direction. Both methods, however, ultimately require human involvement and are difficult to scale to models with large numbers of predictive variables. 
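The article describes the white box only briefly, so as a simplified illustration of the general idea, here is a sketch with scikit-learn: an inherently interpretable model whose learned factor weights can be read off directly, in contrast to a deep network's millions of entangled parameters.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A linear model is a "white box": each feature has one readable weight
data = load_breast_cancer()
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(data.data, data.target)

# Rank features by the magnitude of their learned weights
weights = clf.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(data.feature_names, weights),
                      key=lambda pair: abs(pair[1]), reverse=True)[:5]:
    print(f"{name}: {w:+.3f}")
```

The trade-off is that such transparent models are usually far less expressive than deep networks, which is one reason white-box research and human-guided training remain works in progress.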

The avenues of advancement mentioned above all require human intervention. Today's AI struggles to make decisions in an ever-evolving environment rather than a limited setting: when it faces new data, its models must be continuously adapted and retrained. In this respect it is more an engineering practice than a science, akin to fixing practical issues on the go. This is why a reasonable approach may be to understand the limits of AI-based decision-making, use AI results merely as supplementary tools or references, and rely on human judgment for the actual decisions. 

We might assume that only incremental changes are needed to improve today's deep learning systems, yet we may not fully grasp that AI systems are not truly "intelligent" in the sense of thinking like humans. In other words, we may not even know what would have to be added to AI systems to give them genuinely human-like cognition. This raises the question of whether we have exhausted what can be achieved with the technology that powers today's AI, deep learning. If so, AI's progress may be entering another period of stagnation. 

OK AI, please bring me an iPod!

An image of a cat recognized as an iPod

References 

Machine Learning is Fun – https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471 

Exploiting the Vulnerability of Deep Learning-Based Artificial Intelligence Models in Medical Imaging: Adversarial Attacks – https://synapse.koreamed.org/Synapse/Data/PDFData/2016JKSR/jksr-80-259.pdf 

Seoul National University Develops White Box Approach for Machine Learning Inference System – http://www.irobotnews.com/news/articleView.html?idxno=15370 

Marcus, Gary. "Deep learning: A critical appraisal." arXiv preprint arXiv:1801.00631 (2018) – https://arxiv.org/abs/1801.00631 

Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning – https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning/ 

Approaching Reality: AI Judges on the Horizon – http://www.jeonpa.co.kr/news/articleView.html?idxno=59008