Jan 2023
36m 54s

Dr. GPT

CNA
About this episode

Andy and Dave discuss the latest in AI news and research, starting with an education program that teaches US Air Force personnel the fundamentals of AI, tailored to three types of personnel: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Strategic Enforcement Plan to target AI-based hiring bias. The US Department of State establishes the Office of the Special Envoy for Critical and Emerging Technology to bring “additional technology policy expertise, diplomatic leadership, and strategic direction to the Department’s approach to critical and emerging technologies.” Google calls in its founders, Larry Page and Sergey Brin, to help address the potential threat posed by ChatGPT and other AI technology. Researchers from Northwestern University publish research demonstrating that ChatGPT can write fake research paper abstracts that pass plagiarism checkers, and that human reviewers correctly identified only 68% of the generated abstracts. Stephen Wolfram publishes an essay on combining ChatGPT with the computational powers of Wolfram|Alpha. Check Point Research demonstrates how cybercriminals, including those with no experience in creating malicious tools, can use ChatGPT for nefarious exploits. Researchers at Carnegie Mellon demonstrate that full body tracking is now possible using only WiFi signals, with performance comparable to image-based approaches. Microsoft introduces VALL-E, a text-to-speech AI model that can mimic anyone’s voice from only three seconds of sample audio. The Cambridge Handbook of Responsible AI is the book of the week, with numerous essays on the philosophical, ethical, legal, and societal challenges that AI brings; Cambridge has made the book open-access online. And finally, Sam Bendett joins for an update on the latest AI and autonomy-related information from Russia as well as Ukraine.

Up next
Feb 2023
All Good Things
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IE…
28m 29s
Feb 2023
Up, Up, and Autonomy!
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force p…
37m 19s
Jan 2023
EmerGPT
Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Human-Centered AI that assesses progress (or lack thereof) on the implementation of the three pillars of America’s strategy for AI innovation. The Department of Energy is offering up a to…
36m 5s
Recommended Episodes
Feb 2023
In Machines We Trust: The AI in the newsroom
We asked ChatGPT to summarize this episode and this is what it wrote: "In the episode, the host discussed the increasing use of AI language models like ChatGPT in newsrooms. The host explained that ChatGPT, a large language model developed by OpenAI, is being used to automate tas…
18m 13s
Dec 2022
Ethical AI
In this episode of High Theory, Alex Hanna talks with Nathan Kim about Ethical AI. Their conversation is part of our High Theory in STEM series, which tackles topics in science, technology, engineering, and medicine from a highly theoretical perspective. In this episode, Alex hel…
22m 39s
Dec 2022
Can ChatGPT Make This Podcast?
It’s writing podcast scripts, finishing students’ homework and correcting mistakes in computer code: ChatGPT, the A.I. chatbot from OpenAI, is suddenly everywhere. Who should decide how it’s built? What could go wrong? And what could go right? On today’s episode: Aviv Ovadya, a tec…
57m 57s
Oct 2023
Ethics in AI
As the adage states, “with great power comes great responsibility,” and as artificial intelligence emerges as one of the fastest-growing tools across multiple sectors of industry, leaders in these fields have a responsibility to determine a code of ethics for how AI will be…
28m 21s
Nov 2023
The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish
As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tec…
38m 44s
Mar 2023
Babbage: Is GPT-4 the dawn of true artificial intelligence?
OpenAI's ChatGPT, an advanced chatbot, has taken the world by storm, amassing over 100 million monthly active users and exhibiting unprecedented capabilities. From crafting essays and fiction to designing websites and writing code, you’d be forgiven for thinking there’s little it…
43m 22s
Apr 2023
Can we control Artificial Intelligence?
Last month a company in San Francisco called OpenAI released an artificial intelligence system called GPT-4, a successor to its hugely popular AI chatbot ChatGPT. The latest version can respond to images, write captions and descriptions, and process up to 25,000 words at a time.…
49m 15s
Mar 2023
Should We 'Pause' AI?
It's been another month of impressive and unsettling AI breakthroughs. And, along with excitement, these breakthroughs have also sparked concerns about the risks AI could pose to society. Take OpenAI's release of GPT-4, the latest iteration of its ChatGPT chatbot. According to th…
12m 50s