Sep 2022
36m 25s

Keep Watching the AIs!

CNA
About this episode

Andy and Dave discuss the latest in AI news and research, starting with a publication from the UK’s National Cyber Security Centre providing a set of security principles for developers implementing machine learning models. Gartner publishes the 2022 update to its “AI Hype Cycle,” which qualitatively plots the position of various AI efforts along the “hype cycle.” PromptBase opens its doors, promising to provide users with better “prompts” for text-to-image generators (such as DALL-E) to generate “optimal images.” Researchers explore the properties of vanadium dioxide (VO2), which demonstrates volatile memory-like behavior under certain conditions. Meta AI announces a nascent ability to decode speech from a person’s brain activity without surgery (using EEG and MEG). Unitree Robotics, a Chinese tech company, is producing its AlienGo robotic dog, which can carry up to 11 pounds and perform other actions. Researchers at the University of Geneva demonstrate that transformers can build world models with fewer samples, for example, generating “pixel perfect” predictions of Pong after 120 games of training. DeepMind demonstrates the ability to teach a team of agents to play soccer, controlling movement at the level of joint torques and combining it with longer-term goal-directed behavior; the agents demonstrate jostling for the ball and other emergent behaviors. Researchers at the University of Illinois Urbana-Champaign and MIT demonstrate a Composable Diffusion model to tweak and improve the output of text-to-image transformers. Google Research publishes results on AudioLM, which generates “natural and coherent continuations” given short prompts. And Michael Cohen, Marcus Hutter, and Michael Osborne publish a paper in AI Magazine arguing that dire predictions about the threat of advanced AI may not have gone far enough in their warnings, offering a series of assumptions on which their arguments depend.

https://www.cna.org/our-media/podcasts/ai-with-ai

