

The Ethical Complications of AI

Tim Huckaby's op-ed delves into AI's ethics, its impact on society, and the need for responsible development. A must-read on the future of Artificial Intelligence.

By Tim Huckaby

Chief Technology Officer at Lucihub | AI Consultant

November 2, 2023


My mission here is a little op-ed piece on the ethics of AI. Op-ed is short for "opinion editorial." Unlike most of the writing I do, where I try to stay objective, this is going to be my personal opinion, my perspective, and my speculations about the tremendous good and the significant bad we may see in the coming years and decades as a result of AI innovation.


In that light, it seems ethically responsible of me to write this without the help of ChatGPT. So, I won't. But, when I'm done, just out of curiosity, I will see what GPT says to a prompt like, "Write me an 800-word op-ed article on the ethics of AI." It might be interesting to compare that result against this article. The images in this article, though, were created with AI.


Like generations of "old guys" before me, I feel, mistakenly, that my ethical structure is much more rigid than that of the generation behind me. I believe it was Socrates or Plato or Aristotle who said the world was doomed because of the coming generation. For 2,500 years we have been complaining about our kids. But, darn, I can't look that up in GPT because of my self-imposed "no GPT help in this article" rule. So, let me first predict that the world we are delivering to my children's generation is going to be a lot more complicated than the one we are dealing with right now.


My inspiration to write this article came partly from recently watching one of my heroes, Geoffrey Hinton, frequently called the "Godfather of AI," on the CBS television show 60 Minutes. Just watching this soundbite from the episode will sober you up quickly to the implications of AI technology innovation.


I have been pontificating in keynotes for years now about the tremendous good that has already come from AI innovation, and backing it up with compelling live demos on stage. AI is already saving lives and has been for a while. AI's bold promise is to solve some really tricky health care problems we have yet to understand at any significant level (like pancreatic cancer or type 1 diabetes). AI is also already producing tremendous productivity gains for us. For instance, just this week Microsoft's Copilot for the Edge browser magically appeared. Now, when I get to a web page with a ton of content, I simply type (or say) "Generate page summary" and it spits out the CliffsNotes version. It feels like cheating. It's intoxicating and I can't resist it.


I have no expertise or experience in the construction of an LLM, nor do I feel like I need any. To me that seems like a solved problem, with LLMs offered by OpenAI, Microsoft's implementation of GPT, the many that Google either offers or helps to fund, Meta's LLaMA, and a myriad of open-source varieties like Falcon, Guanaco-65B, Vicuna-33B, and MPT-30B. I do, though, have plenty of experience in implementing LLMs via API calls in application software. That is where I feel my experience and expertise should lie.
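
To make that concrete, here is roughly what "implementing an LLM via an API call" looks like from application code. This is a minimal sketch using the OpenAI Python SDK (v1+); the model name and prompt are illustrative, not code from any real product:

```python
# Minimal sketch: calling a hosted LLM from application code.
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Summarize this article in three sentences: <article text>"},
    ],
)
print(response.choices[0].message.content)
```

All of the hard work, the model itself, sits behind that one API call; the application code just shapes prompts and handles responses.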


As an example, at Lucihub we are close to releasing the beta of "Butterfly." Butterfly transforms your professional video idea into a script, storyboards, and a shot list. We are using an LLM behind the scenes, ultimately having the AI produce the assets. This type of software architecture is frequently called "LLM wrapping." We are wrapping GPT and DALL-E, but we are using Microsoft's Azure OpenAI implementation of those services. Microsoft takes several measures to safeguard Azure OpenAI against ethical and moral breaches. Azure OpenAI has abuse monitoring that detects and mitigates the "bad stuff." For instance, Butterfly cannot build you a script for a video on "the tools of terrorism." That type of prompt is intercepted at the service level, which is what you'd expect of an enterprise LLM offering like Azure OpenAI. Ultimately, it automatically safeguards us from abuse. Microsoft also has tools that allow us to assess and enhance the fairness of the prompts we programmatically derive. But these tools require developers... people.
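
For the technically curious, here is a minimal sketch of what that kind of LLM wrapping looks like against Azure OpenAI. It assumes the openai Python SDK (v1+); the environment variables, deployment name, and generate_script helper are all hypothetical illustrations, not Butterfly's actual code:

```python
# Minimal sketch of an "LLM wrapper" over Azure OpenAI.
# The environment variables and deployment name below are hypothetical.
import os

import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2023-05-15",
)

def generate_script(video_idea: str) -> str:
    """Wrap the LLM: turn a video idea into a first-draft script."""
    try:
        response = client.chat.completions.create(
            model="my-gpt4-deployment",  # hypothetical Azure deployment name
            messages=[
                {"role": "system",
                 "content": "You turn video ideas into professional video scripts."},
                {"role": "user",
                 "content": f"Write a short shooting script for a video about: {video_idea}"},
            ],
        )
        return response.choices[0].message.content
    except openai.BadRequestError as exc:
        # Azure OpenAI's service-level content filtering rejects disallowed
        # prompts (e.g. "the tools of terrorism") before the model ever sees
        # them; the wrapper just surfaces that refusal to the application.
        raise ValueError(f"Prompt rejected by the service: {exc}") from exc
```

The notable part is the except branch: the refusal comes from the service itself, not from anything the wrapper does, which is exactly the automatic safeguarding described above.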


So, my question, out of pure naivety, is this: "Why can't an ethical and moral structure be built into AI systems?" Why do we instead weed out the bad with prompt engineering? Simply having AI learn from the great writings on ethics and morality of the Greek philosophers I mentioned above (and the many more writings on ethics and morality throughout time) should have a direct impact on an AI system, right? I fear the answer may be that there is no current motivation or monetary path to building an ethical structure into AI systems.

Tim Huckaby
Chief Technology Officer at Lucihub
With 35 years of experience in the technology industry, Tim is a veteran in the field. His expertise spans cutting-edge technologies such as AI, computer vision, machine learning, AR/MR, data visualization, and edge computing. His background as a Microsoft Regional Director (RD) and AI MVP demonstrates his deep knowledge of, and contributions to, the industry. His technical prowess and comprehensive understanding of these advanced technologies make him a valuable member of the team.
