Nov 2021
37m 33s

The Ode to Decoy

CNA
About this episode

Andy and Dave discuss the latest in AI news and research, including:

- NATO releases its first AI strategy, which includes the announcement of a one-billion-euro "NATO Innovation Fund." [0:52]
- Military research labs in the US and UK collaborate on autonomy and AI in a combined demonstration, integrating algorithms and automated workflows into military operations. [2:58]
- A report from CSET and MITRE finds that the Department of Defense already employs a number of AI and related experts, but that the current personnel system hides this talent. [6:45]
- The National AI Research Resource Task Force partners with Stanford's Institute for Human-Centered AI and the Stanford Law School to publish Building a National AI Research Resource: A Blueprint for the National Research Cloud. [6:45]
- In a trio of "AI fails": a traffic camera in the UK mistakes a woman for a car and issues a fine to the vehicle's owner [9:10]; the Allen Institute for AI introduces Delphi as a step toward developing AI systems that behave ethically (though it sometimes thinks it's OK to murder everybody if it creates jobs) [10:07]; and a WSJ report reveals that Facebook's automated moderation tools were falling far short on accurately identifying hate speech and videos of violence and incitement. [12:22]
- Ahmed Elgammal from Rutgers teams up with Playform to compose two movements for Beethoven's Tenth Symphony, for which the composer left only sketches before he died.
- And finally, Andy and Dave welcome Dr. Heather Wolters and Dr. Megan McBride to discuss their latest research on the Psychology of (Dis)Information, with a pair of publications: one providing a primer on key psychological mechanisms, and another examining case studies and their implications.

The Psychology of (Dis)information: A Primer on Key Psychological Mechanisms: https://www.cna.org/CNA_files/PDF/The%20Psychology-of-(Dis)information-A-Primer-on-Key-Psychological-Mechanisms.pdf

The Psychology of (Dis)information: Case Studies and Implications: https://www.cna.org/CNA_files/PDF/The-Psychology-of-(Dis)information-Case-Studies-and-Implications.pdf

Follow the link below to visit our website and explore the links mentioned in the episode.

https://www.cna.org/CAAI/audio-video