AI in 5
Bye Bye Bard... ChatGPT Agents are Coming!
Bye Bye Bard… Hello Gemini!
The Keyword
Google has unveiled Gemini, a rebranding of its Bard chatbot and the Duet AI features in Google Workspace, along with a dedicated Gemini app for Android devices. The app gives users access to conversational and multimodal AI capabilities, positioning it as a potential replacement for Google Assistant. Google has also launched Gemini Ultra 1.0, the most capable model in the Gemini family, through the new Google One AI Premium plan, which bundles 2TB of Google Drive storage and other perks for $20 per month. Google's in-search AI is still called the Search Generative Experience for now, but it is expected to be folded into the Gemini brand in the future.
Why it matters: Google's introduction of Gemini marks a significant evolution in its AI offerings, consolidating various AI capabilities under one unified platform. By integrating chatbot functionality, multimodal capabilities, and a powerful language model like Gemini Ultra, Google aims to provide users with enhanced AI experiences across its ecosystem.
EU Unanimously Passes Landmark AI Act
Foreign Policy
The European Union has achieved a significant milestone by unanimously approving the final draft of the AI Act, marking a pivotal moment in the regulation of artificial intelligence applications. The Act introduces comprehensive governance and transparency requirements, particularly for high-risk AI, and aims to foster innovation in trustworthy AI while ensuring a predictable regulatory environment for AI companies. Following potential adoption before summer, the Act will undergo an implementation period before coming into effect, potentially shaping global industries, including military applications.
Why it matters: The approval of the EU's AI Act after two years of negotiations signifies a crucial step toward establishing a framework for the ethical and safe use of AI. With regulations addressing various aspects of AI applications, from prohibiting certain uses to introducing governance rules and transparency requirements, the Act aims to strike a balance between innovation and safety.
Meta Safeguarding Against Deepfake Audio with Imperceptible Watermarking
Meta
In response to the growing threat of deepfake audio, Meta's AI research team has developed AudioSeal, a system that embeds imperceptible watermarks into synthetic speech so that AI-generated audio can be detected. As AI voice synthesis advances, distinguishing real from synthetic speech has become increasingly difficult, raising concerns about misuse such as voice cloning and impersonation. Rather than trying to spot fakes after the fact, AudioSeal takes an active approach: generated voices are marked at creation time, enabling precise identification of manipulated audio and surpassing previous detection methods in speed, efficiency, and robustness to editing techniques.
Why it matters: AudioSeal's development marks a significant advancement in combating the proliferation of deepfake audio, offering a promising solution to safeguard against the spread of manipulated content.
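AudioSeal's actual watermarking is learned by neural models described in Meta's research release. As a conceptual toy only (not AudioSeal's method), the "active watermarking" idea can be sketched as adding a keyed, low-amplitude pseudorandom signal to generated audio and later detecting it by correlation with the same secret key:

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int = 42, strength: float = 0.01) -> np.ndarray:
    """Add a low-amplitude pseudorandom sequence derived from a secret key."""
    rng = np.random.default_rng(key)
    return audio + strength * rng.standard_normal(audio.shape[0])

def detect_watermark(audio: np.ndarray, key: int = 42, threshold: float = 0.005) -> bool:
    """Estimate the embedded strength by correlating against the keyed sequence."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(audio.shape[0])
    estimate = float(np.dot(audio, mark) / np.dot(mark, mark))
    return estimate > threshold

# Demo on a stand-in "generated speech" signal (a plain sine tone).
t = np.linspace(0.0, 1.0, 16000)
speech = 0.1 * np.sin(2 * np.pi * 220.0 * t)
print(detect_watermark(embed_watermark(speech)))  # watermarked -> True
print(detect_watermark(speech))                   # clean -> False
```

The `key`, `strength`, and `threshold` names here are illustrative; a real system like AudioSeal uses trained generator/detector networks that survive compression and editing, which this correlation toy does not.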
Hugging Face Unveils Open-Source AI Assistant Maker to Challenge Custom GPTs
Hugging Face
Hugging Face introduces Hugging Chat Assistants, enabling users to develop personalized AI chatbots at no cost, positioning itself as a competitor to OpenAI's custom GPT Builder. Unlike OpenAI's proprietary models, Hugging Chat Assistants leverage diverse open-source large language models (LLMs) like Mistral's Mixtral or Meta's Llama 2, emphasizing user choice and customization. The platform features a centralized repository for users to share and access tailored Assistants.
Why it matters: Hugging Face's release of Hugging Chat Assistants marks a significant development in the realm of open-source AI, offering users the ability to create personalized chatbots without the financial constraints associated with proprietary solutions. By leveraging a variety of open-source models, the platform promotes user autonomy and fosters innovation within the AI community.
Microsoft and OpenAI Partner with Semafor to Offer Breaking News Delivery with AI
Observer
Microsoft and OpenAI have joined forces with news startup Semafor to introduce a groundbreaking AI-powered breaking news feed named "Signals". Semafor's journalists will leverage MISO, a custom GPT built on OpenAI's platform, to scour diverse global sources and swiftly curate multi-source news stories. This collaboration aims to streamline the news curation process, providing readers with timely and reliable updates while offering Semafor a significant boost in resources and influence in shaping the AI media landscape.
Why it matters: The partnership arrives amid debate within the news industry over the role of AI, highlighted by the New York Times' lawsuit against OpenAI, and underscores the urgency for media leaders to adapt to an evolving AI-driven journalism landscape.
Meta and OpenAI Spearhead Efforts to Combat AI-Generated Disinformation
Meta
In response to the proliferation of AI-generated images and deepfakes posing significant threats to public discourse, Meta and OpenAI have announced initiatives to address the issue. Meta is developing AI-powered tooling to automatically detect and label AI-generated images across its platforms, using invisible watermarks and embedded metadata to distinguish authentic content from manipulated media. OpenAI, for its part, is embedding provenance metadata and a visible watermark in images generated by DALL-E 3, with the aim of enhancing trust in digital information.
Why it matters: As AI-generated media continues to blur the lines between reality and deception, initiatives by Meta and OpenAI signify a proactive approach to address the challenges posed by AI-driven disinformation. However, there are concerns regarding the effectiveness of automated detection algorithms and the risk of inadvertently undermining human analysis by fostering dependency on corporate-led authenticity verification.
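The metadata half of this approach can be illustrated with a toy example: writing a provenance flag into a PNG text chunk with Pillow. This is a deliberate simplification; the companies' actual schemes rely on standards like C2PA and on invisible watermarks, not plain text chunks, which any editor can strip:

```python
import io
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a provenance flag into a PNG text chunk at save time.
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("ai_generated", "true")

buf = io.BytesIO()
img.save(buf, format="PNG", pnginfo=meta)
buf.seek(0)

# A consumer can inspect the chunk when loading the image.
reopened = Image.open(buf)
print(reopened.text.get("ai_generated"))  # "true"
```

The `ai_generated` key is hypothetical. The fragility of metadata like this, which vanishes on re-encoding or screenshotting, is exactly why the companies pair it with watermarks embedded in the pixels themselves.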
OpenAI Developing Autonomous Agent Software for Task Automation
Yahoo Finance
OpenAI is reportedly pivoting its focus towards the development of "Agents," advanced software designed to automate complex tasks and operate devices autonomously. These Agents will interact with users' devices to execute sophisticated tasks like booking flights or gathering information without the need for continuous human supervision. This strategic shift comes as OpenAI seeks to expand beyond its renowned AI chatbot ChatGPT and establish itself as a leader in the broader AI market.
Why it matters: OpenAI's push into autonomous agent software is a strategic move to capitalize on growing demand for AI-driven task automation. With Microsoft's backing, OpenAI aims to give users software capable of executing intricate tasks independently, streamlining workflows and enhancing productivity.
Microsoft Collaborates with sarvam.ai to Advance AI Capabilities in India
Viestories
Microsoft has teamed up with Indian AI startup sarvam.ai to develop OpenHathi-Hi-0.1, an Indian language equivalent of ChatGPT, aimed at enhancing voice-based interactions. Leveraging Microsoft's cloud services, sarvam.ai will train, host, and scale its AI voice application initially in Hindi, with plans to expand to other Indian languages. This partnership aligns with Microsoft's broader initiative, ADVANTA(I)GE INDIA, which aims to equip 2 million individuals in India with AI skills by 2025. The initiative focuses on upskilling the workforce, government officials, and nonprofit organizations, fostering inclusive socio-economic progress while adhering to responsible AI principles.
Why it matters: The collaboration between Microsoft and sarvam.ai underscores Microsoft's commitment to driving AI innovation and fostering inclusive growth in India. By providing training and support for AI development, particularly in regional languages, the initiative seeks to democratize access to AI technologies and empower individuals across diverse communities.
OpenAI Establishes Child Safety Team to Safeguard Against Misuse of AI Tools by Minors
Tech Times
OpenAI has introduced a dedicated 'Child Safety' team to address concerns surrounding the potential misuse of AI tools by underage users. The team is tasked with developing processes and protocols to manage incidents and reviews related to underage users of OpenAI's tools, aiming to mitigate risks such as plagiarism and misinformation. This initiative follows growing concerns about the use of AI by children and teens for personal issues like anxiety and mental health, as well as calls for regulatory guidelines on AI usage in education from organizations like UNESCO. OpenAI's Child Safety team collaborates with internal policy, legal, and investigation groups, as well as external partners, to ensure the responsible and safe use of AI tools by children, aligning with regulations such as the Children's Online Privacy Protection Rule (COPPA).
Why it matters: The formation of OpenAI's Child Safety team underscores the organization's commitment to promoting the responsible and ethical use of AI tools, particularly among vulnerable populations like children. By establishing robust processes and enforcement mechanisms, OpenAI aims to address concerns surrounding underage usage of AI tools and prevent potential harms such as exposure to inappropriate content or privacy violations.