Artificial intelligence (AI) in medicine offers both promising potential and cautionary tales. AI has shown promise in assisting doctors as scribes, providing second opinions, and handling back-office tasks, but there have also been false alarms and cases of deepened health disparities.

The Food and Drug Administration (FDA) is grappling with how to regulate and describe AI programs that help doctors detect various medical conditions. President Biden has issued an executive order to manage security and privacy risks in AI, including in healthcare. Still, considerable debate and uncertainty surround the oversight and effectiveness of AI in medicine. The FDA has been criticized for limited vetting of, and a lack of transparency about, the programs it approves, and doctors are hesitant to embrace AI fully without greater confidence in its efficacy and safety. Meanwhile, large health systems and insurers can build their own AI tools with little government oversight.

Despite some success stories, such as AI programs detecting brain clots and improving patient outcomes, concerns remain about the scarcity of publicly available information and the potential for unnecessary procedures and higher medical costs. Efforts are underway to evaluate and publish findings on FDA-cleared AI programs, but updated regulations and a comprehensive regulatory framework are needed to ensure the responsible and effective use of AI in medicine.