
Is Artificial Intelligence, intelligent?

Personally I've never been a fan of the term Artificial Intelligence, simply because in my opinion it's not a particularly accurate term. Intelligence requires an understanding of consequence, which is invariably limited to the realm of human activity, or better put, human decision making. Animals are also capable of intelligence and, like humans, are autonomous, though they obviously don't possess the degree of intelligence, accountability and higher learning (language, for example) that is intrinsic to human beings. Definition matters, because it ensures people understand the subject they're trying to grasp in its proper context. Ambiguity, conversely, leads to highly subjective and interpretive assumptions that in most cases are divorced from the subject matter in question in favour of marketing, sales or other agendas. This is a common problem in IT, where new terminology or technology is routinely misunderstood, usually because of a liberal use of phrases and terms promoted primarily for marketing purposes.

Case in point: AI is a term with a very broad and loose definition that can mean many different things to different audiences. Some consider automation, machine learning, smart assistants, search queries, even the heuristics-based scanning inside security suites such as email filtering or anti-virus, to be AI. It should be noted that none of the above are new technologies. In fact, a few years ago none of them were considered AI either. So what, then, is AI? It is certainly artificial, in that it's a man-made tool and not part of the natural world, but is it accurate to call it intelligent?

AI, in many respects, is an algorithm: a piece of code or logic, usually designed to support a business process, that produces a desired result based on defined input criteria or conditions. At its simplest the code or process is essentially an IF statement: "IF this condition exists, DO that task". Once the condition is met, the algorithm or process responds with a result. It possesses no conscious or autonomous attributes capable of understanding the consequence of a result it produces, and it will always be limited to the confines of its code or design constraints. Ergo, the AI is only as useful as the infrastructure it runs on and the developers who maintain the code that makes it up. No code is perfect, nor can it ever be, so the implications of code designed to make decisions are considerable.
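
To make the "IF THIS, DO THAT" point concrete, here is a minimal sketch in Python. The rule names and conditions are hypothetical, not taken from any real product; the point is simply that this kind of logic only ever handles what its author anticipated.

```python
# Illustrative sketch of rule-based "IF condition, DO task" logic.
# The field names and thresholds below are made up for the example.

def route_support_ticket(ticket: dict) -> str:
    """Return a queue name based on hard-coded conditions."""
    # IF this condition exists...
    if ticket.get("priority") == "high" and "outage" in ticket.get("subject", "").lower():
        return "incident-response"   # ...DO that (execute the task)
    if ticket.get("category") == "billing":
        return "finance"
    # Anything the author did not anticipate falls through to a default.
    return "general-queue"

print(route_support_ticket({"priority": "high", "subject": "Network outage"}))
# -> incident-response
print(route_support_ticket({"priority": "low", "subject": "Strange new problem"}))
# -> general-queue: the rules have no understanding of consequence
```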

Another often-touted feature of AI is its ability to process large amounts of data via LLMs (Large Language Models). It's self-evident that intelligence is far more than the ability to process large amounts of data. What is far more intelligent is the ability to contextualise data: to sift out the noise and home in on the portion of the data that matters to the task or output. AIs attempt to do this via machine learning, essentially sifting through enormous databases or graphs to match patterns, the industry's thinking being that the more data and compute you can throw at an AI, the greater the likelihood of a better result. This obviously has implications: commercial, environmental, sociological, moral and so on. It is also a highly inefficient approach from both an infrastructure and a code point of view, with an indefinite number of iterations, budgets and infrastructure required to reach some apparent nirvana.
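
To illustrate the "sifting and matching patterns" idea, here is a deliberately naive sketch. The documents and query are invented, and real machine-learning pipelines use learned statistical representations rather than token counting, but the underlying principle is the same: patterns are matched, not understood.

```python
# Naive illustration of "matching patterns in a pile of data".
# Documents and query are hypothetical examples.

documents = {
    "doc1": "quarterly sales figures for the retail division",
    "doc2": "holiday rota and annual leave policy",
    "doc3": "sales forecast and pipeline for the next quarter",
}

def score(query: str, text: str) -> int:
    """Count how many query tokens appear in the text."""
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "next quarter sales forecast"
ranked = sorted(documents.items(), key=lambda kv: score(query, kv[1]), reverse=True)
for name, text in ranked:
    print(name, score(query, text), text)
# doc3 ranks highest purely because its tokens overlap with the query,
# not because anything "understood" what a forecast is.
```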

The mere fact that an AI is limited to the abilities of its developers, and to whatever biases they decide to code for, is a testament to the inherent limitations of what some think AI can achieve. It is vital, in my opinion, to separate this fact from fiction: technology is not a limitless upward curve of progress, and it requires sensible guard rails of responsibility while also ensuring the tool is fit for purpose. In that respect it is merely another tool, no different to a hammer or a spanner, used by whoever is wielding it to achieve their objective. Achieving both of those objectives will prove difficult, and even more so because of the huge amount of vendor marketing behind AI to boost user adoption.

From my personal experience I have yet to see really sound use cases that solve a genuine problem or business need. Most users, from what I can tell, are using AI as a glorified search engine, or simply to create images, or perhaps within productivity suites like M365 for transcription, annotation and task management, a pseudo PMO/PM if you will. Again, none of the aforementioned are new, none of them required AI in the past, and none of them solve real-world problems. Regardless, the question remains: what is the measure of productivity and output now that the AI pixie dust has been sprinkled on these technologies? Are we actually more productive as a result? And can this new-found productivity, along with its touted improvements in output, be empirically measured and compared against what we had prior to the introduction of AI? In other words, how does a consumer cut through the industry's jargon and marketing to understand what the real benefit of AI is to them personally or to their business?

Concerning data, which is where AI is supposed to excel, I have yet to see any business using it in anger for any kind of serious analysis, because that requires truly pristine data sets to ensure an accurate output. If the input data is inaccurate, then obviously the output will be too. So unless AI develops the ability to miraculously cleanse data (and data cleansing is no trivial feat), applying AI to this process ironically introduces more problems. Case in point: if CoPilot, ChatGPT and others are anything to go by, this is going to be a serious problem in the years to come, as the current crop of AIs are incapable of producing accurate outputs for some of the simplest queries. In fact, the current AIs are already littering the Internet with voluminous amounts of inaccurate data. When inaccurate data is used to create inaccurate outputs, it's self-evident that the consequences could be serious, and even more so when AI is being used to make decisions.
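
A small "garbage in, garbage out" sketch makes the point. The figures below are invented, and the cleansing rule is itself an assumption a human has to design: no amount of clever processing downstream fixes bad input on its own.

```python
# Garbage in, garbage out: an aggregation is only as accurate as its inputs.
# The sales figures are hypothetical; 11500 represents a data-entry error.

raw_sales = [1200, 1150, 11500, 1180]   # extra zero typed into one record

naive_average = sum(raw_sales) / len(raw_sales)
print(f"Average with dirty data: {naive_average:.0f}")   # badly skewed result

# Cleansing is a separate, non-trivial step; this crude cut-off is itself
# a human judgement about what counts as a "valid" record.
cleansed = [x for x in raw_sales if x < 5000]
print(f"Average after cleansing: {sum(cleansed) / len(cleansed):.0f}")
```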

Within the industry there seems to be an implicit assumption that AI can only be a force for good. Yet it has, and will always have, inherent limitations due to its lack of intrinsic intelligence: you can't code consciousness, free will, sentience or morality, no matter how much fans of AI would have you believe otherwise, and none of those can be decoupled from intelligence anyway. Because the effects of an AI are bound sociologically to our experience as humans, its practical application will always be problematic. On that basis, and taking into account how nascent AI still is, along with existing problems that seem to be getting little attention, I ask you again: how intelligent is artificial intelligence, really?