
On Artificial General Intelligence and Agencies


Michael L Anderson

Black Oak Engineering

New York USA

ver 0.4, 01 Feb 2025


There is a common view that if an Artificial General Intelligence (AGI) is trained broadly enough it will subsume and surpass the mental capabilities of humans. I disagree. It’s not even close. The necessary missing ingredient is agency, and specifically the multiple, often conflicting, agencies that animals coordinate in their careers through the survival landscape. By agency I mean the directed, purposive behavior that animals exhibit in the course of their lives, often in opposition to other entities, and often requiring the cooperation of others like oneself. Even simple animals, however, host multiple agencies: to eat, to seek an optimal environment, to avoid danger, and to reproduce, at a minimum. That last one is often at odds with the others, and often fatally. Social animals add new agencies: to perform one’s role in the group, to communicate news, etc. Juggling these different priorities in an ever-shifting opportunity landscape requires centralized cognitive control, but even this is conflicted, for no one of the priorities is absolute. Serving multiple dynamic agencies composes the ‘what it’s like’ of our existences.


By AGI I mean broadly the spectrum of cases which purport to have intelligence comparable to our own, though they differ quite a lot in approach. The general view is that, if we just keep adding servers, total parameters, training experts, bandwidth, etc., we will inevitably exceed human capabilities. Just look at the specs.


Writers such as Hans Moravec, Andy Clark, Alva Noë, Kevin J Mitchell, and Eric Schmidt have touched on similar themes. Sometimes the emphasis is on robotic implementations or on ‘embodied’ intelligence, whether human or artificial. I wish to add to this tradition. Intelligence is as intelligence does, one might say. Pure abstraction, intelligence-in-itself, is worth little, if this term means anything at all. My immediate concern is that the recent showy successes of Big AI (e.g., ChatGPT, DeepSeek[1]) may distract one from the more prosaic work of fundamental AGI. Also, I wish to discuss how self-driving vehicles and military drones align with my arguments. I argue that they will never be permitted to develop as would be necessary for them to become full AGIs. I will discuss the essential and striking ontological differences between humans and AGIs. Lastly I sketch a dynamical systems model of how, I suggest, animals of all degrees of intelligence operate.


Where it’s at now

AGIs can be narrowly trained to win games of chess, go, and quiz show competitions. They can identify anomalies in radiographic imagery better than human clinicians, and without the conflict of interest. They can hypothesize complex molecular biological structures and interactions. There are self-driving cars, which, while not perfect, far exceed the safety-mindedness of human drivers[2]. There are early-stage autonomous combat drones. Generative Large Language Model AGIs can produce fakes that fool humans. Indeed, in some creative ventures, the artificial output is ‘as good’ as human output as judged by experts[3] in the field. These skills are, in a narrow sense, an agency, as I define that. A computer’s hardware and software are saved from the scrap heap as long as they perform well in their specific task. But an AGI’s agency is dedicated solely to one purpose and is completely dependent on others for its existence. It has no existential grounding or freedom.


We are in murkier waters when we look at an AGI theorizing over mathematical proofs or theoretical physics. Few would accept these results without the blessing of traditionally-intelligent humans. As long as this restriction is stipulated, an AGI is nothing more than a heuristic, as when the four-color theorem was proved by computer. Trust but verify.


Curiosity

There appear to be two different types of criteria for assessing whether we have an AGI:

  1. Variations on the Turing test. In retrospect these seem too lax. Humans as judges are too gullible, at least when it comes to the pathetic fallacy. Big AI definitely thrives on this gullibility. ‘Social robots’, those designed to look approximately human, also take advantage of this. It is remarkable how easily we are convinced by the gestures of a few motorized facial actuators. Of course, a philosopher with a naturalistic bias will ask: what are our human emotions but a set of grimaces and winks?

  2. Practical tests involving sensors and actuators, such as the ability to assemble random sets of Ikea flat-pack furniture. These are more in line with this paper. Remarkably, robots are still far from proficient at such tasks. While such a robot, in the original Czech sense of ‘slave-laborer’, will definitely have a huge economic impact, this does not strike me at all as generally intelligent.


I suggest that a better criterion to assess an AGI is curiosity, the desire to learn more about the world and its deeper principles. Often just for the joy of learning. Curiosity can require a huge expenditure of resources that could be used elsewhere. It often fails to pay off or leads to aporia. It can be deeply asocial and subversive, almost maladaptive.


It is not at all clear how curiosity could be directly programmed into an AGI. Add an RTOS task: ‘Dedicate 5% of your waking time to asking about some odd phenomenon that caught your eye.’ That is obviously too vague. Curiosity may be an emergent property. It would presumably obtain if we somehow managed to introduce the necessary basic preconditions, but what are those?
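To see just how vague, here is what such a naive ‘curiosity task’ might look like if one actually wrote it down. This is a deliberately thin sketch: the Percept type, the salience score, and the investigate stub are hypothetical placeholders of mine, not any real robotics API, and every hard question hides inside them.

```python
# A deliberately naive sketch of 'curiosity as a scheduled task'.
# Percept, salience, and investigate are hypothetical placeholders;
# the substance of curiosity is exactly what the stubs leave out.
import random
from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    salience: float  # how 'odd' the percept seems -- but odd relative to what?

def curiosity_task(percepts, budget=0.05):
    """Spend roughly 5% of cycles 'asking about' the oddest recent percept."""
    if random.random() > budget or not percepts:
        return None  # not this cycle, or nothing noticed
    target = max(percepts, key=lambda p: p.salience)
    return f"investigate({target.label})"  # and what would 'investigate' even mean?

print(curiosity_task([Percept("anomalous shadow", 0.9), Percept("wall", 0.1)]))
```

The scheduler is trivial; what counts as odd, what counts as an answer, and why the agent should care are precisely the missing preconditions.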


How do we know that an AGI has real curiosity? This is a problem even in the human world, in academia. Are academics curious about the world? For the past hundred years or so, their metric for success has been the sheer number of published papers and, hopefully, the citation reach of a key paper. So academics have gamed the system. They publish wantonly. Form takes precedence over content. If the content were truly interesting, i.e., if it appealed to the curiosity of others, it would be much more generally popular. So I guess that is what to look for: an AGI that publishes out of the blue a set of fundamental papers such as those of Einstein in 1905, or the works of Wallace and Darwin, or a doable experimental regime for verifying superstring theory. Something grounded in empirical reality, yet theoretically bold and unifying. This would be a sufficient condition for AGI. Yes, this is a high bar. Few humans would meet this challenge. It is not clear that a lower bar has real meaning.


Freebots

I use the term ‘freebot’ to refer to a hypothetical robotic AGI that is allowed to roam the wild freely, do whatever it must to survive, perhaps to learn, perhaps to develop curiosity. Freebots would almost certainly be many; one alone has little motivation to do anything but sit and charge its solar panels. Freebots are social creatures like honeybees, herring, and primates. If they have the ability to envision risk scenarios and to take creative steps to mitigate the risks, they must be deemed intelligent. With humans these strategic responses are often in the context of warfare. If group A does not develop defensive technology, group B will simply take their things by force; A will suffer if not perish altogether. I do not see that this aggression must be the case, however. This may be an accident of biological evolution. Freebots could be entirely reasonable and cooperative amongst themselves, and also develop technology. Nonetheless, they will still need to deal with humans, who are unquestionably aggressive. So the freebots will need to develop defensive technology or the ability to flee humans. This of course is why we do not allow freebots. Way too dangerous, and to no human’s benefit.


I see no obvious reason why highly intelligent freebots could not arise, if given a chance. This would, I suggest, require the varying and often toxic mix of predispositions that characterize human beings. But convergent evolutionary faculties are fairly common. The underlying physics and situational problems are the same. If several wildly different anatomic structures, poorly suited for change, can slowly adapt into useful wings or eyes, why not so with cognitive ability? And presumably at a much brisker, deliberately engineered pace.


Conversely, a good way, perhaps the only good way, to understand what we call human consciousness would be to construct freebots that flourish and think. Then we would need to be able to catch one in the wild and analyze its hardware and software. Frankly, it is doubtful that we could completely reverse engineer this. Ethically we are on quite thin ice. But even so, if the Gedankenexperiment is permitted, we would learn something of ourselves in the process. Did we require some irreducible élan vital? A soul? Quantum effects in microtubules? Some emergent precursors?


What are my agencies?

As a typical social animal, I host myriad agencies. Here is a far from exhaustive list:

  • Transmit my selfish gene.

  • Preserve my life in the face of danger.

  • Maintain my life in the face of hunger.

  • Provide for my family.

  • Love my spouse.

  • Advance my professional career.

  • Serve my country in time of war.

  • Serve my corporation or political party.

  • Promote my ‘people’s’ interests, variously defined.

  • Promote my favorite ‘cause’.

  • Follow my favorite sports team.

  • Seek the truth about the JFK assassination.

  • Learn a difficult skill over many years.[4]

  • Tell stories, gossip, and gripe.

  • Beat my competitors, just for the sheer joy of it.

  • Obey the laws of my jurisdiction.

  • Play, dance, and have fun with others.

  • Exercise, even when it hurts.

  • Obey religious precepts.

  • Honor and seek a personal mental relationship with the deity as I understand that.

  • Seek social reward and advancement.

  • Heed irrational and self-harmful impulses.


Instead of agencies you could call these personality traits, compulsions, neuroses, virtues, vices, or drives. They are directed behaviors. They can be highly abstract and symbolic. One might call them ‘programs’ that run in one’s brain. There must be a coded sequence in my brain that recognizes a situation (here is a group of willing listeners and a decent piano), an ability (my fingers remember how to play the Moonlight Sonata; my feet remember how to work the pedals; my eyes remember how to read the score), an emotional earnestness (I need to push and pull; I need to tell the musical story, but not stray too far from the score). We have today only the roughest understanding of how and where this is coded in the brain. But it is certainly reasonable to assume that coded it is.


The dynamic mix itself seems to be essential. I am not unitary, except nominally. I discuss this further below.


Our conflicted phenotype

Everyone’s balance of priorities will differ, and this will largely depend on their individual genome. The balance will evolve over time. It is situationally dependent. The opportunities and exigencies pop in and out constantly in an ever-shifting landscape. It requires our complete attention and can be mentally exhausting. Moreover, we are generally expected to ‘make our own luck’ in this landscape. That is, to carefully lay the groundwork and to toil daily toward an ambitious if ill-defined end.


Humans have a rich set of conflicting agencies, but by no means do other animals have it completely simple and easy. A cockroach or a mako shark has the same basic survival and reproduction problems to deal with; a troop of chimpanzees shares many of our more advanced and pernicious traits.


It is a melancholy observation that it is probably necessary to host such a heterogeneous genotype, with its mixed and conflicted results. The diversity optimizes the chances that some of us will survive extreme conditions, such as civil war or emigration. The ability to disengage one’s emotions and empathy may be a boon in times of crisis; in peacetime it may be sociopathic. There are outlier individuals who carry this too far, and they often become criminals. The overall aggressive tone of human phenotypes is arguably necessary to pursue scientific knowledge and self-examination. Sheep may be peaceful, but they are not industrious.


It is not difficult to enumerate the possible conflicts. Most combinations of agencies are incompatible. I may need to suppress my fanaticism for a football team to make peace with my spouse. I definitely need to push everything else out of my mind if I am confronted by a mugger. I need to ‘pull myself together’ at work if my habits are noticeably sloppy due to distractions.


Game theory

From the point of view of each individual agency, it is in game-theoretic competition with the others. To the extent that it is ‘serious’ about its mission, it will demand time and resources at the expense of the others. For a considerable majority of us, one of our ‘passions’ will prevail. We will die in a war that we could have fled, or we will obsess over a love gone wrong.
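One can make this competition concrete with a toy arbitration loop. The sketch below is purely illustrative: the agencies, their ‘growth rates’, and the winner-take-all rule are assumptions of mine, not a model of any real brain. It shows only how a simple competitive arbiter lets one high-gain ‘passion’ crowd out the rest.

```python
# Toy model: agencies bid for the next block of time; urgency grows while an
# agency is neglected and resets when it wins. All numbers are invented.
import random

class Agency:
    def __init__(self, name, growth):
        self.name = name
        self.growth = growth          # how fast urgency builds when neglected
        self.urgency = random.random()

    def bid(self):
        return self.urgency

agencies = [
    Agency("provide for family", 0.10),
    Agency("advance career",     0.08),
    Agency("follow sports team", 0.03),
    Agency("obsessive passion",  0.25),   # the one that tends to prevail
]

hours_won = {a.name: 0 for a in agencies}
for hour in range(1000):
    winner = max(agencies, key=Agency.bid)      # winner-take-all arbitration
    hours_won[winner.name] += 1
    for a in agencies:
        a.urgency = 0.0 if a is winner else a.urgency + a.growth

print(hours_won)   # the high-growth 'passion' claims the largest share of hours
```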


My existence vs an AGI’s

The essentials of my human situation are that:

  • I am but one of many.

    • In contrast, an AGI is trained as a ‘one from many’. And this is done in a carefully supervised top-down ‘engineering’ way. There tends to be a client-server model. ChatGPT runs in the cloud, and I may access it.

  • The others out there are different in many ways, but I’m never entirely sure how.

    • The source code and training methods of an AGI are, in principle, open to the developers. Of course, software of even modest complexity quickly eludes the comprehensive understanding of humans.

  • I ‘emerge’ ad hoc from the wider system. Heidegger would say, ‘I am thrown.’

    • An AGI is engineered, carefully developed and trained in labs and cubicles using standardized practices.

  • Failure is not an option. I can throw away my life to slow dissipation, but I cannot easily decide to stop breathing or eating. I am constantly ‘driven and derided by vanity’.

    • To program an AGI with a compulsion, or even a bias, toward self-preservation risks venturing into the freebot domain. A remarkably high percentage of highly intelligent robots are weapons, such as cruise missiles, intended for self-destruction. They do not protest.

  • Success, or perhaps ‘victory’, is not easily defined. Presumably it is an optimization of my various agencies: I am healthy, rich, beloved, smart; I have healthy children; I converse regularly with God. But it will never be objectively clear what success is when the parameters are orthogonal. If I have everything material-wise but lack children, am I complete? Have I failed?

    • The success of an AGI is fairly straightforwardly measured by its technical parameters. A self-driving car traverses an average of twenty thousand urban kilometers between incidents. A chat engine can write essays that pass muster with Ivy League admissions committees. Etc.

  • I am mobile. I can usually ‘vote with my feet’ when present circumstances are too harsh.

    • It is noteworthy that the two examples which, I believe, come anywhere close to being AGIs are self-driving vehicles and combat drones.

  • I am independent. At some point no one cares much how I am doing.

    • In contrast, an artificial intelligence is constantly groomed and edited by its custodians.

  • I can condense my survival experience into my genome, so to speak, in a non-Lamarckian way. The genome does not accumulate specific knowledge, except perhaps in edge cases, but it does preserve and foster the ability to survive and navigate the world. This gets passed from generation to generation, sometimes with random mutations that may occasionally enhance the species’ probability of survival.

    • There is certainly an aspect of AGI development that is ‘heritable’. We try new approaches, discard some, stick with others, often with little justification. Freebots would presumably condense useful lore into encyclopedias that could be transmitted to descendants. But this abstract knowledge would be useless without practical experience. What must be preserved is the toxic mix, the drives and compulsions, the mechanism to continue the drama through the generations.

  • These are my agencies. I own them. I am responsible for them. Even if someone else is the beneficiary, even if I may forfeit my life, it’s my call how they are used.

    • AGIs often use the personal pronoun. It’s not difficult to program. Interactive robots can flinch when approached too quickly. Their responses are generally written to be, in a positive sense, self-serving. This pertains to a small subclass of AGIs and robots that are specifically intended to entertain humans.

    • One is always conscious that information shared with an AGI is also shared with Alphabet, Amazon, Meta, OpenAI, or whoever sponsors the AGI. It is like an aggressive journalist who charms her subject into sharing a little juicy gossip, ‘just between us two’. The confidences are not private, not shared between two morally equivalent beings. They are public. They are commodities.

  • My ken is just that, my personal scope of understanding. What I have been able to figure out in the course of my life, more or less.

    • The AGI simply taps into a huge collective database in the cloud. It can presumably access the most arcane historical data imaginable. If asked to generalize, it will give a ‘consensus of human experts’. However, for complex phenomena such as the causes of the First World War or the Great Depression, the ethics of abortion, or the interpretation of quantum mechanics, there is no consensus. There are various points of view with varying degrees of sophistication; all are tendentious. Human points of view change radically over time, largely based on shifting value systems. A truly intelligent AGI would develop its own point of view.

  • My ‘identity’ is not invariant over these agencies. Indeed, a common identity, a self, is barely recognizable as a unitary thing. The only thing binding them together is their mortal host, me. Me is just a farrago of traits with the label ‘me’ on it.

    • An AGI has a well-defined identity, in this sense. It is the thing that passes a Turing test, or the Ikea test, or the thing that writes prize-winning Hollywood screenplays. And this is all it is. It could be programmed to have other pursuits, but why would we do that? A self-driving car that spontaneously philosophizes on Buddhism, even if it did both well, would be a platypus.


Something like AGI: military robots

Military and law enforcement robots are a middling case, and the one that I feel is most likely to approach real AGI. We have already seen the early stages. Flying and submersible drones have featured prominently in Ukraine’s defense against the Russian invasion. These drones are presently human-operated, but we can easily foresee a shift toward autonomy. Since communications are generally degraded in the war zone, and kamikaze pilots are in short supply, the drone can easily be given the ability to recognize threats and act accordingly. As the opposing force develops clever decoys and countermeasures, the drone’s cognitive abilities must improve, but these are just engineering problems. Its mission is generally suicidal. It will take risks, presumably unquestioningly, that the military finds too difficult to assign to humans. Quite likely, this risk will be to fight opposing drones, ones that can easily withstand several g’s of acceleration, large pressure shock waves from detonations, or other such factors that are lethal to humans. Is there a point in the evolution of this suicidal drone at which it should be called an AGI? Not by the criteria I have given. If this thing is easily distracted by a drone of the opposite sex, or pauses like Wilfred Owen to pen verse, it will be deemed unreliable by its military program office and discarded. It is a sophisticated tool, not a being. Its targets must be carefully designated by humans. There is generally little tolerance for friendly fire incidents. Even if the loss of common foot soldiers is deemed acceptable, the leadership will not allow[5] themselves to be accidentally targeted. So controls will be imposed, and these are significant existential limitations. The drone will not be permitted autonomy. My whole argument is that, without autonomy, there is no true intelligence.



Not quite AGI: self-driving vehicles

Designing a vehicle to traverse from point to point is fairly straightforward. In principle. We need GIS to update location and to track the system of roadways. A Kalman filter helps to maintain the integrity of the observer model. We need to obey the speed limits and other traffic regulations. Sensors such as cameras, radar, and lidar keep us inside a safe lane and let us respond to encroachments into our ‘air pocket’. While not diminishing these daunting engineering tasks, there is no magic here, just the coordinated work of thousands of talented engineers. Tesla and other vehicle companies tend to use derivatives of Linux as an OS to architect this task. An amazing accomplishment before which I bow humbly, but in the end just engineering. The self-driving vehicle is, in short, not highly ‘intelligent’, despite its remarkable capabilities.
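For concreteness, here is a minimal sketch of the kind of filtering mentioned above: a one-dimensional constant-velocity Kalman filter fusing noisy position fixes. Real automotive stacks fuse many sensors into far richer state models; the matrices and noise figures here are illustrative assumptions only.

```python
# Minimal 1-D Kalman filter: state is [position, velocity], measurement is a
# noisy position fix (GPS-like). Values are illustrative, not from any vehicle.
import numpy as np

dt = 0.1                                   # time step [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # we measure position only
Q = np.diag([0.01, 0.1])                   # process noise (assumed)
R = np.array([[4.0]])                      # measurement noise, ~2 m std dev (assumed)

x = np.array([[0.0], [0.0]])               # initial state estimate
P = np.eye(2) * 10.0                       # initial uncertainty

def kalman_step(x, P, z):
    # Predict forward one time step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed in a few noisy position fixes from a car moving at roughly 10 m/s.
for z in [1.2, 2.1, 2.9, 4.2, 5.0]:
    x, P = kalman_step(x, P, np.array([[z]]))
print("estimated position, velocity:", x.ravel())
```

The estimator is textbook material and entirely deterministic once tuned, which is part of the point: impressive, but just engineering.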


What other kinds of problems would we ask an AGI to solve?

  • Highly integrated quantum cosmological models with experimental predictions.

  • Fiscal policy of a nation state.

  • Arbitration of an international trade dispute.

  • Adjudication of civil law actions.

  • Compelling explanations of such shopworn but persistent terms as ‘consciousness’, ‘free will’, or ‘causality’. ChatGPT and DeepSeek will regurgitate the usual platitudes. What we need is an explanation with the plangency of Newton’s laws of mechanics. Just learn them, learn how to apply them in some real-world cases, and it will be obvious to anyone of reasonable intelligence that this is the preferred explanation. Anything else is a waste of time, mere poetry at best.


A model of dynamical opportunity optimization

Mr Fox crests a hill one day in May. Just over the top he sees a vixen, Ms Fox. They lock eyes. They catch each other’s scent. Hormones gush. Love. His brain goes into gear. ‘How do I woo this fascinating vixen?’, he thinks, as it were. ‘Is he the one?’, she asks herself. Animals have various mating rituals. But they are all patterns of novel behavior that must be executed in time. These must be computed. They must terminate upon various criteria, with success or failure. There is a judgmental aspect. Not every opportunity leads to mating.


Later Ms Fox crests a hill to discover a rabbit upwind. Slowly, stealthily, with excruciating patience, she creeps up on the rabbit. Mind the fickle wind. She angles over to approach between the bunny and the brambles, where it probably has a hole. At the last possible moment, she pounces.


Mr Rabbit had been wary, his ears quivering, but the spring grass was really tasty. That fox came out of nowhere. His adrenaline surges. The brambles are blocked. If I zigzag roughly in that direction, I might make it.


Reproduction, hunger, and danger are three orthogonal poles of existence. Many others exist, of course, depending on the creature. The opportunities rarely occur simultaneously, fortunately, but they all require a pattern of response. One’s most sophisticated and high-priority cognitive processes must be used. There will be physiological changes to support this. Much depends upon it.


As a dynamical system we may represent the poles as orbits in phase space. These are strange attractors. The trajectory is driven by the particulars of the encounter, and the computation that occurs in the host.


There is a quiescent, low-energy orbit in which the creature spends most of his or her time. And there is a constant, parallel homeostasis process running. This interacts with the dynamical system. For lack of better terms, one may call the low-level processes ‘sub-conscious’, and the dynamical system ‘conscious’.
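As a toy illustration of this picture, the sketch below lets an internal state relax toward a quiescent attractor while stimuli pull it toward ‘hunt’, ‘flee’, or ‘mate’ attractors. The state variables, gains, and attractor locations are invented for the sake of illustration; this is a cartoon of the dynamics, not a claim about any real nervous system.

```python
# Toy dynamics: a 2-D internal state drifts toward a quiescent baseline
# (the parallel 'sub-conscious' homeostasis) unless a stimulus pulls it
# into the basin of another attractor (the 'conscious' engagement).
import numpy as np

ATTRACTORS = {
    "quiescent": np.array([0.0, 0.0]),
    "hunt":      np.array([1.0, 0.2]),
    "flee":      np.array([-0.5, 1.0]),
    "mate":      np.array([0.8, -0.8]),
}

def step(state, stimuli, dt=0.05, k_home=0.3):
    """One integration step: homeostatic pull to quiescence plus
    stimulus-weighted pulls toward the other attractors."""
    drift = k_home * (ATTRACTORS["quiescent"] - state)
    for mode, strength in stimuli.items():
        drift += strength * (ATTRACTORS[mode] - state)
    return state + dt * drift

state = np.zeros(2)
# A rabbit appears (hunt stimulus); then a stronger threat (flee) takes over.
for t in range(200):
    stimuli = {"hunt": 1.5} if t < 100 else {"flee": 3.0}
    state = step(state, stimuli)
nearest = min(ATTRACTORS, key=lambda m: np.linalg.norm(state - ATTRACTORS[m]))
print("final state:", state, "-> nearest mode:", nearest)
```

The same machinery runs whether the stimulus is a rabbit, a fox, or a willing mate; what changes is which orbit the state is drawn into.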


Natural language

Humans, far more than other species, add another aspect, which we are quite fond of. We can articulate meta-descriptions of our environment. We have developed language to an extraordinary degree. This includes everything from gossip and stories to the Langlands program. How exactly this meta-level corresponds to the world is, to say the least, ‘beyond the scope’. But it is prudent to discuss the place of language in the present dynamical systems model. Interestingly, language originates from the low-level processes, the ‘sub-conscious’, as does the ability to play a musical instrument or to sketch a figure. Language, the use of structured symbols, allows us to, inter alia, greatly improve our survival odds. If I can tell the tribe that I saw rain clouds in the westward distance, so there is probably game there, we all benefit. If I can discuss how a certain type of metallic stone may be baked in an oven, with air constantly blown in, we can obtain a superior spear tip. If I can analogize the falling of an apple with the ‘falling’ of the moon, we can start to understand gravity.

And, of course, if I communicate with a Large Language Model, I can feel strongly persuaded that I am having a sincere, authentic conversation with someone who listens to me and can provide cogent explanations of complicated world events, and so must be highly intelligent. Perhaps our gullibility comes from the many generations of reliance upon language as a vital social bond. Between humans, we can usually detect when someone is being disingenuous with us. We are finely tuned to the cues. Not so with electronic communications. Language is never the thing, but at best a fragmentary reflection of the thing. Professional wordsmiths and tech billionaires tend to forget this.


Michael L Anderson, manderson @ blackoakeng.com, holds an MSEE and an MS Physics. He served in the USMC. He now works as a consulting design engineer and business executive in New York.


[1] Two inescapable aspects of the recent fuss over LLMs are:

  1. These are associated with the entertainment industry, broadly defined to include social media, which is huge, but which by definition is not ‘serious’.

  2. The sheer magnitude of the US stock market that can be thrown at this, which has but little to do with its intrinsic merits. And counterpoised to this is the well-funded CCP’s desire to show up the west. This need to ‘show off’ has little promise for sustained support; the investors will move on.

[2] In the US, and perhaps elsewhere, the problem here is a legal one. While robotic vehicles are far safer drivers, there is less societal tolerance for mishaps. The parent company may be sued inordinately for damages. Whereas, with human drivers, many of whom are borderline homicidal, the insurance companies bear the material cost, and this imposes limits. Another problem is that it is quite easy to remove the restrictions on the vehicle’s operation. We allow them to flagrantly violate traffic laws. In a more perfect world, each vehicle would obey the local traffic laws, and everyone would reach their destination safely and, in the aggregate, with minimal time.

[3] The celebrated Hollywood screenwriter Paul Schrader described recently how ChatGPT was able within seconds to produce first drafts of screenplays that were, in Schrader’s estimation, as good as any human’s. This could be done in various styles as well: “...Paul Thomas Anderson. Then Quentin Tarantino. Then Harmony Korine. Then Ingmar Bergman. Then Rossellini. Lang. Scorsese. Murnau. Capra. Ford…”.

[4] I suggest that this is the hallmark of meaningful ‘free will’. We spend a lot of time discussing the volition of acts such as extending one’s arm, and how the physical action often precedes the mental state. This is interesting, but I think it more interesting to ask how an individual executes an extended life task in the face of so many short-term obstacles and distractions, and long-term ennui.

[5] It might be argued that nuclear weapons are a counterexample: everyone is in peril if they are used, and political leaders have certainly not refrained from developing them. But they have not been used since Hiroshima and Nagasaki, thank goodness. A hypothetical autonomous freebot AGI with such strategic destructiveness would have to be released into the wild, and its very power risks friendly fire incidents. It would be suicidal. Moreover, unused, such AGIs would have little deterrent effect, because there is no way to test and verify them. We cannot obliterate a Pacific island with an AGI. Early prototypes will fail badly.




