Artificial Intelligence
An entity created and feared by humans, and one in need of a serious awareness campaign.
If you subscribe to any daily briefings, at least one story each day likely lands in your inbox on the latest artificial intelligence developments. A simple Google search of “AI” presents different headlines every hour. At the University of Virginia, rats can now drive tiny cars, and apparently find it relaxing. MIT researchers celebrate a massive breakthrough on a robot that can detect five thousand different scents, while the NHS is under investigation for an algorithm meant to prioritise patient care that was found to discriminate against Black patients. Another article reads, “AI apocalypse: Artificial Intelligence will now be ‘weaponized’ warns expert.” The beauty of having access to all this information is a result of data sharing and, more broadly, human technological innovation, but these same advances scare the shit out of people. The bias against AI – which seems to stem from a general lack of familiarity and understanding – breeds feelings of privacy loss and an anti-data-sharing backlash. You know the type: they tape over their front camera to stave off unwanted monitoring, or refuse to fill out the form for WiFi access because surrendering personal demographic data isn’t worth the free surfing. This phenomenon lends itself well to an overused colloquialism: you can’t have your cake and eat it too. Do humans want cool tech to show off (and use only to the extent that is comfortable for us), or do we want to continue along the lines of impressive advancement and design things that will eventually surpass our own abilities?
A senior member of the Royal Academy of Engineering recently gave a public lecture at UCL to address this exact paradox. In 1965, Intel co-founder Gordon Moore observed that the number of transistors on a chip – a rough proxy for processing power – had doubled every year since the invention of the integrated circuit, and correctly predicted the trend would continue (he later revised the doubling period to roughly two years). Balancing this exponential growth with the more linear track of human maturation becomes more relevant every second, especially as an estimated 90% of the internet’s data has been generated in the past two years. Building trust around data sharing and raising AI-related awareness is a major issue; one that, if addressed, could have significant benefits across many fields – think not only medicine, transport, technology, and more efficient production, but education, outreach, climate, and public safety.

This intersection between human psychology and technological development is well illustrated by a complaint against one of America’s favorite department stores: Target. Target was using consumer data and computer algorithms to optimize its advertising. It discovered that pregnant women had the potential to be some of its most loyal customers, so its algorithms would track consumption patterns correlated with pregnancy and send the flagged women tailored advertisements (prenatal vitamins, health foods, maternity fashion, and baby products) over the course of their trimesters. A genius use of data, but not quite refined enough: an enraged father confronted the company for sending these advertisements to his fifteen-year-old daughter, only to find out a few months later that she was indeed pregnant.
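The sheer scale of the exponential growth Moore described is easy to underestimate, so a quick back-of-the-envelope calculation may help. This is only a sketch: it assumes the commonly cited two-year doubling period for transistor counts (Moore's original observation was a yearly doubling, later revised), and the starting figure for Intel's first microprocessor is approximate.

```python
# Back-of-the-envelope illustration of Moore's Law-style exponential growth.
# Assumes a two-year doubling period; figures are approximate and illustrative.

def doublings(start_year, end_year, period_years=2):
    """Number of doublings between two years, given a doubling period."""
    return (end_year - start_year) / period_years

def growth_factor(start_year, end_year, period_years=2):
    """Multiplicative growth over the interval under steady doubling."""
    return 2 ** doublings(start_year, end_year, period_years)

# Intel's 4004 (1971) held roughly 2,300 transistors.
factor = growth_factor(1971, 2019)  # 48 years -> 24 doublings
print(f"Growth factor: {factor:,.0f}x")
print(f"Projected transistors: {2300 * factor:,.0f}")
```

Twenty-four doublings multiply the starting count by nearly seventeen million, which is why a trend that sounds modest year-to-year leaves human institutions – and human intuition – so far behind.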
Clearly, human interactions are far too complex for artificial intelligence platforms to match, and there are instances where data and information are better kept private. However, addressing these disparities through education and cooperative discussion, rather than rage and fearful ignorance, could allow humans to innovate in ways that are comfortable for everyone.
This being established, it should be made clear that there are two types of artificial intelligence. The first, and the one responsible for the fearful hysteria, is called general-purpose AI. This is the type capable of independent, common-sense decision-making that operates similarly to the human brain. General-purpose AI is still in very experimental stages of development: despite popular misconception, software and mechanical engineers make it clear that this level of AI is still twenty to twenty-five years away from being a reality.
The second type is known as narrow AI. This kind of machine learning is capable of performing tasks without constant, explicit instruction or coding. Narrow AI is responsible for everything from curating your social media feeds to the aforementioned robot with a fabricated olfactory system and the flawed NHS prioritisation platform. Think of narrow AI as a savant of sorts: its programming is so targeted and niche that it is capable only of its specialization, not of any dystopian robotic takeover. Optimal performance is contingent on access to adequate data pools – the vaster and fresher the data, the better the AI performs – and this is where some members of the public shut down and trade wide-eyed, disapproving remarks. However, given that Moore’s Law still roughly holds today, the issue of data sharing as it relates to artificial intelligence will not and cannot be dismissed.
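What “learning from data rather than explicit rules” means can be shown with a toy sketch. The example below is hypothetical – the shopper profiles and labels are invented for illustration, and real systems like Target's are vastly more sophisticated – but the principle is the same: the program is never told the rule, it only generalizes from labeled examples, and it can do nothing outside this one narrow task.

```python
# A toy "narrow AI": a one-nearest-neighbor classifier.
# It learns a task purely from labeled examples, with no explicit rule coded in.
import math

def nearest_neighbor(training_data, point):
    """Return the label of the training example closest to `point`."""
    best_label, best_dist = None, math.inf
    for features, label in training_data:
        dist = math.dist(features, point)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical shopper profiles: (weekly_visits, avg_basket_spend) -> segment
train = [
    ((1, 20), "occasional"),
    ((2, 35), "occasional"),
    ((5, 80), "loyal"),
    ((6, 95), "loyal"),
]

print(nearest_neighbor(train, (5, 70)))  # -> loyal
print(nearest_neighbor(train, (1, 25)))  # -> occasional
```

Notice that the classifier's competence comes entirely from its data pool: with four examples it draws crude boundaries, and only more (and fresher) examples sharpen them – which is exactly why data access is the sticking point in the public debate.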
It is human nature to be afraid of the unknown. Entities that are complex, foreign, and especially out of our control are naturally less appealing to us. It doesn’t help that the number of data scientists among us, though growing, is only a fraction of a population largely unacquainted with AI. I myself cannot deny the overwhelming feelings of inadequacy and intimidation as I attempt to better understand this science. So, to catalyse the spread of AI understanding, it may help to consider other implementations of data collection and use around the world. In some sense, humans have been collecting data on each other and their surroundings since before the earliest forms of writing; our ability to accumulate knowledge is what differentiates us from other animals. Artificial intelligence is being used everywhere: groundbreaking climate research, for example, has been done in Imperial College London’s labs for years now. On October 9th, 2019, an Israeli chemist with a PhD from Cambridge University presented a report at the Imperial Lates science event on her successful implementation of a self-sufficient, zero-emission technology system for developing communities. Although this type of narrow AI may be received more comfortably by the general public, the basis for these developments is the same as any other: access to data, engineering and programming technology, and the deployment of a specific product in an attempt to optimize something.
Despite the benefits associated with AI, there is no doubt that the field is in serious need of an awareness campaign. Complicated calculations aside, one of the biggest difficulties for engineers is building trust. A constant consideration for professionals in the field is reverse-engineering what seems like an incomprehensible area of science to make AI intelligible and transparent for the public. Pretty soon, humans will need to make up their minds: do we want to be proud of our gadgets and select technological feats, or are the international cooperation and revolutionary advances made possible by AI worth getting comfortable with data sharing? I have faith that we can use some of the same platforms that scare us to achieve the latter; after all, someone coined that wise phrase about having your cake and eating it too for a reason.