The Dolphin and the Monkey – Using Human Intellect in the AI age
Published Jan 25 2024 11:55 AM

The Monkey and the Dolphin

It happened once upon a time that a certain Greek ship bound for Athens was wrecked off the coast close to Piraeus, the port of Athens. Had it not been for the Dolphins, who at that time were very friendly toward mankind and especially toward Athenians, all would have perished. But the Dolphins took the shipwrecked people on their backs and swam with them to shore.

 

The monkey riding on the back of a dolphin to shore.

The Monkey and the Dolphin - Read.gov

 

Now it was the custom among the Greeks to take their pet monkeys and dogs with them whenever they went on a voyage. So when one of the Dolphins saw a Monkey struggling in the water, he thought it was a man, and made the Monkey climb up on his back. Then off he swam with him toward the shore. The Monkey sat up, grave and dignified, on the Dolphin's back.

 

"You are a citizen of illustrious Athens, are you not?" asked the Dolphin politely.

"Yes," answered the Monkey, proudly. "My family is one of the noblest in the city."

"Indeed," said the Dolphin. "Then of course you often visit Piraeus."

"Yes, yes," replied the Monkey. "Indeed, I do. I am with him constantly. Piraeus is my very best friend."

 

This answer took the Dolphin by surprise, and, turning his head, he now saw what it was he was carrying. Without more ado, he dived and left the foolish Monkey to take care of himself, while he swam off in search of some human being to save.

 

Our goal as operators, no matter our discipline, is to find ways to establish trust in AI outputs, but not in the way we did with previous computer systems. Instead, we should develop and use frameworks for conversational understanding that drive clarity, resulting in trust that facilitates any workflow or job.

 

We are operators

How might the dolphin have saved itself the effort of carrying a monkey on its back? Certainly not through its line of questioning, which was predicated upon past experiences and expectations. Anyone operating in a familiar system defaults to a framework of understanding and, ultimately, of trust. What the dolphin later demonstrated, and how it should have questioned the monkey, is that every conversation is a continual affirmation of trust: you either gain clarity on the information presented, or you continue to operate on acquired knowledge, past interpretations, and experiences.

 

Like the dolphin, we are all operators, executing against different jobs we are quite good at, be it securing networks, analyzing data, architecting or maintaining systems, configuring policy, or facilitating the operations of people. And like the dolphin, precedent and experience over time establish expectation: there should be a Greek in the water; this process is invariant. Our position as operators mirrors that of the dolphin in other ways too; we anticipate something akin to human intelligence when working with AI systems. Just as the dolphin unquestioningly trusts the nature of the output on its back, we often trust and expect more from AI systems than their actual capabilities warrant. There is a further consideration: another form of conditioning we have long been exposed to, the knowledge and behavior of computer systems. Their programmatic, rigid nature and discrete outputs are an experience ingrained in us, along with our expectations; they have long produced well-defined outcomes we trust. Yet we've upended that paradigm in the AI age.

 

Our interactions with computer systems are now conversational, and like any conversation, the ebb and flow stems from how we engage. A conversation repeated one hundred times will vary in both its questions and its answers, producing disparities in outcomes. In any conversation, we can elect to trust prior conditioning, or inquire and verify before trusting any statement and its subsequent outcomes. Accordingly, we can opt out of the conversation at any point, be it a natural conclusion or an interaction that violates trust or expectation. The outcome is our choice at any point; the other party (computer system, copilot, or person) is just along for the ride. Whether you are conducting the conversation with an AI system or building the methodologies and interactions for one, the core objective remains the same: establish a framework of trust.

 

Human intellect and frameworks to define trust

Our goal as operators, no matter our discipline, is to find ways to establish trust in AI outputs, but not in the way we did with previous computer systems. Instead, we should develop and use frameworks for conversational understanding that drive clarity, resulting in trust that facilitates any workflow or job. If we peel back and define what trust is, it is a cognitive capacity based on critical thinking and decision making, with cognition simply being the process of acquiring knowledge. Our intellect is our ability to apply knowledge in an adaptive situation. That is, in essence, what a conversation is: adapting to information while referencing or validating it against knowledge we can access (be it our own memory or secondary knowledge sources). Thus, the basis for contextualizing interactions with AI systems should be mechanisms to acquire, process, and reference knowledge against any information or output.

 

There are numerous emerging frameworks for AI systems, whether to enhance security, define responsible operation, or establish mechanisms for coexistence within society, all designed to help define platform- or system-level parameters of operation and outcomes. While those are critical for overall design, the onus of how to perceive, act on, or trust AI system outputs still falls to the operator. If we apply a framework of adaptive knowledge acquisition to conversational AI systems, each interaction can operate in some stratum of trust. A framework could look something like this:

 

  1. Define the system (the boat, human, and shore) – Understand the LLMs, architectures, inputs, and anchoring that define the mechanisms by which an AI system produces responses. The response or its context could be derived from grounding data, training data, a retrieval-augmented generation (RAG) process, orchestration that incorporates data from plugins, or a combination thereof.
  2. Source the information (Athens and Piraeus) – Establish the extent of information retrievable by the defined system architecture. Specify the sources of the system's knowledge, or the scope of operator knowledge to gain, understand, and verify.
  3. Validate responses (confirm citizenship or a preference for bananas) – Check the accuracy of prompt responses, results, and any conclusions against the sources of information. AI systems built on strong responsible AI principles will cite the information they provide and substantiate opinions and conclusions.

Whatever the methodology, the one above or your own, establish a mechanism that leverages your intellect to vet and validate the outputs of AI systems. Experiential trust in outputs can be attained, but not without intellect.
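To make step 3 concrete, here is a minimal Python sketch of what operator-side validation might look like, assuming the AI system returns an answer along with the sources it claims to be grounded in. The types and the naive term-overlap check are purely illustrative, not part of any particular product's API, and real validation still means reading the cited sources yourself.

```python
# Illustrative sketch only: the shapes below are assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Citation:
    title: str
    url: str
    excerpt: str        # passage the system says supports the answer

@dataclass
class AssistantResponse:
    answer: str
    citations: list[Citation]

def flag_for_review(response: AssistantResponse) -> list[str]:
    """Return the citations (or the whole answer) an operator should verify by hand.

    This does not prove the answer is correct; it only flags grounding that
    looks weak, so the operator knows where to apply their own judgement.
    """
    if not response.citations:
        # No grounding at all: treat the entire answer as unverified.
        return [response.answer]

    answer_terms = set(response.answer.lower().split())
    weak = []
    for citation in response.citations:
        # Naive check: does the cited excerpt share any meaningful terms
        # with the answer? A human still has to read the source itself.
        excerpt_terms = set(citation.excerpt.lower().split())
        if len(answer_terms & excerpt_terms) < 3:
            weak.append(f"{citation.title} ({citation.url})")
    return weak
```

The threshold of three shared terms is arbitrary; the point is that the framework gives the operator a deliberate checkpoint rather than implicit trust.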

 

Enabling human-AI interaction

The other side of leveraging human intellect in interactions with AI systems is facilitating trust frameworks (or building the dolphin's sailor-retrieval system). In such a design, the ambition is to arm the operator with mechanisms that anchor their knowledge to the system while enabling a means to validate any output. There are many elements to this, the full extent of which is outside the scope of this writing, but as before there are core concepts to impart:

 

  1. Responsible AI (parameters for sailor retrieval) – Define the training mechanisms, LLMs in use, and architectures that provide reasoning or retrieval capabilities. Articulate data policies, training boundaries, and system learning capacities that could affect response drift over time. Finally, provide transparency into orchestration mechanisms and architecture-based decisions such as plugin selection.
  2. Citation (ports of call) – Any data source leveraged in RAG processes, selected and executed through plugins, or drawn from knowledge bases should be cited as links. Enable users to validate sources, both to qualify the data selection and to foster understanding of how content is assembled, which in turn improves prompt formatting (a rough sketch follows this list).
  3. Qualification (criteria for citizenship) – When substantiating an opinion, arriving at a conclusion, or providing direction, qualify the methodology supporting the outcome. Build the process with the user, enabling them to arrive at a decision or outcome in conjunction with the system. Do not impede judgement or impair the relationship between trustor and trustee.
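As a rough illustration of the citation concept, here is a minimal Python sketch of a RAG-style answer that carries each retrieved chunk's source link through to the final output, so the operator can follow the links and validate. The retrieve and generate functions are stand-ins for whatever search index and model a given system uses, and the example data is invented for illustration.

```python
# Illustrative sketch: retrieve() and generate() are stand-ins, and the
# example content and URL are invented for demonstration purposes.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_url: str

def retrieve(question: str) -> list[Chunk]:
    # Stand-in for the real retrieval step (search index, vector store,
    # plugin call, etc.). Here it returns a canned example chunk.
    return [Chunk(text="Piraeus is the port city serving Athens.",
                  source_url="https://example.com/piraeus")]

def generate(prompt: str) -> str:
    # Stand-in for the call to the underlying LLM.
    return "Piraeus is the port that serves Athens [1]."

def answer_with_citations(question: str) -> dict:
    chunks = retrieve(question)
    # Number each chunk so the model can reference it as [1], [2], ...
    context = "\n\n".join(f"[{i + 1}] {c.text}" for i, c in enumerate(chunks))
    prompt = (
        "Answer the question using only the numbered sources below, and "
        "reference them by number.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    # Return the source links alongside the answer so the operator can
    # validate every claim against where it came from.
    return {
        "answer": generate(prompt),
        "citations": [{"id": i + 1, "url": c.source_url}
                      for i, c in enumerate(chunks)],
    }

print(answer_with_citations("What is the port of Athens?"))
```

Whatever the implementation, the design choice worth keeping is that citations travel with the answer rather than being reconstructed after the fact.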

 

The strength of the framework facilitating trust directly correlates with how much an AI system is used. User trust is predicated upon sustained experiential trust established through interactions over time. Conversations in (or out of) AI systems require a user to swiftly trust and adapt to outputs that inherit prior interactions and perspective. Citation and qualification build cognitive trust, their absence erodes it, and when inaccuracies occur they make the difference in whether a system is deemed reliable. In conclusion, work to establish trust through transparency and through anchoring to the knowledge, or acquisition of knowledge, of the operator.

 

Swimming to shore

Whether you are a designer or an operator, AI systems will always involve some level of human cognitive assessment and input. Leverage your human intellect to inform your decision process and your assessment of any AI system's output, for if you do not, you may just find you have a monkey on your back.
