Sep 2021
39m 49s

The WHO AI: I Can’t Explain (My Generation)

CNA
About this episode

Andy and Dave discuss the latest in AI news, including an overview of Tesla’s “AI Day,” which among other things introduced the Dojo supercomputer specialized for ML, the HydraNet single deep-learning model architecture, and a “humanoid robot,” the Tesla Bot. Researchers at Brown University introduce neurograins, grain-of-salt-sized wireless neural sensors, and use nearly 50 of them to record neural activity in a rodent. The Associated Press reports on flaws in ShotSpotter’s AI gunfire-detection system, including one case in which such evidence sent a man to jail for almost a year before a judge dismissed the case. The Department of the Navy releases its Science and Technology Strategy for Intelligent Autonomous Systems (publicly available), including an Execution Plan (available only through government channels). The National AI Research Resource Task Force extends its deadline for public comment in order to elicit more responses. The Group of Governmental Experts under the Convention on Certain Conventional Weapons holds its first 2021 session on lethal autonomous weapons systems; its agenda has moved on to promoting a common understanding and definition of LAWS. And Stanford’s Center for Research on Foundation Models publishes a manifesto, On the Opportunities and Risks of Foundation Models, seeking to establish high-level principles for massive models (such as GPT-3) upon which many other AI capabilities are built.

In research, the Georgia Institute of Technology, Cornell University, and IBM Research AI examine how the “who” in Explainable AI (e.g., people with or without a background in AI) shapes the perception of AI explanations. And Alvy Ray Smith pens the book of the week, A Biography of the Pixel, examining the pixel as the “organizing principle of all pictures, from cave paintings to Toy Story.”

Follow the link below to visit our website and explore the links mentioned in the episode.

https://www.cna.org/CAAI/audio-video



Up next
Feb 2023
All Good Things
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IE ... Show More
28m 29s
Feb 2023
Up, Up, and Autonomy!
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force p ... Show More
37m 19s
Jan 2023
Dr. GPT
Andy and Dave discuss the latest in AI news and research, starting with an education program from AI that teaches US Air Force personnel the fundamentals of AI across three types: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Str ... Show More
36m 54s
Recommended Episodes
Dec 2022
Ethical AI
In this episode of High Theory, Alex Hanna talks with Nathan Kim about Ethical AI. Their conversation is part of our High Theory in STEM series, which tackles topics in science, technology, engineering, and medicine from a highly theoretical perspective. In this episode, Alex hel ... Show More
22m 39s
Feb 2023
In Machines We Trust: The AI in the newsroom
We asked ChatGPT to summarize this episode and this is what it wrote: "In the episode, the host discussed the increasing use of AI language models like ChatGPT in newsrooms. The host explained that ChatGPT, a large language model developed by OpenAI, is being used to automate tas ... Show More
18m 13s
Nov 2023
The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish
As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tec ... Show More
38m 44s
Mar 2024
Microsoft's $100B AI Supercomputer, UK Actors vs. BBC's AI, and NYC's AI Gun Detectors Debate
Discover the latest breakthroughs and controversies in artificial intelligence with today's episode of Discover Daily. Join us as we explore Microsoft and OpenAI's $100 billion "Stargate" supercomputer project, set to revolutionize AI development with millions of specia ... Show More
5m 59s
Nov 2023
#156 Gordon Crovitz: Will AI SPREAD Misinformation & Fake News?
This episode is sponsored by Celonis, the global leader in process mining. AI has landed and enterprises are adapting. To give customers slick experiences and teams the technology to deliver. The road is long, but you’re closer than you think. Your business processes run through ... Show More
48m 16s
Mar 2023
The AI Dilemma
You may have heard about the arrival of GPT-4, OpenAI’s latest large language model (LLM) release. GPT-4 surpasses its predecessor in terms of reliability, creativity, and ability to process intricate instructions. It can handle more nuanced prompts compared to previous releases, ... Show More
42m 25s
Jan 2023
24. Artificial Intelligence: What Is It? What Is It Not? (feat. Susan Farrell, Principal UX Researcher at mmhmm.app)
The term artificial intelligence, AI, is having a bit of a boom, with the explosion in popularity of tools like ChatGPT, Lensa, DALL•E 2, and many others. The praises of AI have been equally met with skepticism and criticism, with cautionary tales about AI information quality, pl ... Show More
35m 35s
Mar 2024
How AI Solutions Unlock Efficiency | PolyAI's Nikola Mrkšić | Artificial Intelligence Podcast
In this episode of the AI For All Podcast, Nikola Mrkšić, CEO and Co-Founder of PolyAI, joins Ryan Chacon to discuss AI-powered solutions and voice assistants in enhancing organizational efficiency and improving the customer service experience. The conversation explores the growi ... Show More
24m 58s
Jan 2024
Why AI Should Be Taught to Know Its Limits
One of AI’s biggest, unsolved problems is what the advanced algorithms should do when they confront a situation they don’t have an answer for. For programs like ChatGPT, that could mean providing a confidently wrong answer, what’s often called a “hallucination”; for others, as w ... Show More
17m 43s
Dec 2023
The case for AI optimism
AI doomerism and calls to regulate the emerging technology are at a fever pitch, but today’s guest, Reid Hoffman, is a vocal AI optimist who views slowing down innovation as anti-humanistic. Reid needs no introduction; he’s the co-founder of PayPal, LinkedIn, and most recently Infle ... Show More
47m 13s