UK to Spend £100 Million on Chips to Develop Domestic Artificial Intelligence Systems
The UK is reportedly preparing to plough £100 million of taxpayer cash into buying chips needed for advanced artificial intelligence models.
The New York Times is considering taking legal action against OpenAI, the creator of the massively popular AI chatbot ChatGPT, amid growing tensions over copyright infringement.
A study of OpenAI’s ChatGPT, conducted by researchers at the University of East Anglia in the UK, shows that the market-leading AI chatbot has a clear bias towards leftist political parties.
Already struggling to overcome ChatGPT’s documented political bias, OpenAI is boasting of its AI technology’s capability to power content moderation, i.e. censorship.
Tech giant Amazon was one of the last big tech companies to join the generative AI gold rush, announcing its own Titan large language model in April, after Google announced Bard and Facebook launched LLaMA, following the massive success of OpenAI’s ChatGPT last year.
Worldcoin, the new cryptocurrency project that aims to create a global ID system tied to people’s unique biometric data by scanning everyone’s irises, says its ID system will be open to governments and companies around the world.
Major tech companies including Amazon, Google, Facebook (now known as Meta), Microsoft, and ChatGPT developer OpenAI have agreed to adhere to a set of AI safeguards brokered by the Biden administration. One expert commented, “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”
A recent study conducted by Stanford University has unveiled significant performance fluctuations in OpenAI’s AI chatbot, ChatGPT, over a span of a few months. When researchers tested OpenAI’s GPT-4 model, it correctly identified prime numbers 97.6 percent of the time in March, but just 2.4 percent of the time in June.
The FTC has initiated an investigation into OpenAI’s popular AI chatbot, ChatGPT, over allegations of causing harm by publishing false information about individuals along with potential data security issues.
Comedian Sarah Silverman is suing Facebook parent company Meta as well as ChatGPT firm OpenAI for copyright infringement, claiming they used her content without permission to train artificial intelligence language models.
The increasing popularity of OpenAI’s AI chatbot, ChatGPT, has led to a surge in cybersecurity threats, with over 101,000 compromised ChatGPT account login credentials found on dark web marketplaces in the past year. Compromised accounts put ChatGPT users’ privacy at risk because the system keeps a record of chats which may include sensitive information and personal data.
In a recent article, the Wall Street Journal details how the high-profile partnership between tech giant Microsoft and artificial intelligence pioneer OpenAI is charting a new course in the tech industry, marked by both groundbreaking collaboration and behind-the-scenes conflict.
Sam Altman, the CEO of ChatGPT developer OpenAI, called for a collaboration between American and Chinese researchers to counter the risks of artificial intelligence (AI).
OpenAI, the company behind the AI model ChatGPT, is being sued for defamation due to false information generated by its system in what could become a landmark case. The chatbot falsely accused a radio host of embezzlement and defrauding a charity.
GIPPR AI, an implementation of the ChatGPT AI chatbot designed to curtail the original version’s widely documented leftist bias, has been shut down by ChatGPT creator OpenAI.
The European Commission has ordered its staff to cease using artificial intelligence for work that is of “critical” importance.
More than 350 executives, researchers, and engineers from leading artificial intelligence companies have signed an open letter cautioning that the AI technology they are developing could pose an existential threat to humanity.
A New York-based attorney is facing potential sanctions after using OpenAI’s ChatGPT to write a legal brief he submitted to the court. The problem? The AI chatbot filled the brief with citations to fictitious cases, a symptom of AI chatbots called “hallucinating.” In an affidavit, the lawyer claimed, “I was unaware of the possibility that [ChatGPT’s] content could be false.”
In the midst of the booming AI industry, Timnit Gebru, a former lead researcher on Google’s ethical AI team who was fired by the Silicon Valley Masters of the Universe, is cautioning against potential dangers. She argues that the rapid growth in the field, akin to a “gold rush,” is sidelining important ethical safeguards, and calls for more external regulation.
Apple has placed restrictions on its employees’ use of generative AI tools like OpenAI’s ChatGPT and GitHub’s Copilot, citing data security concerns. According to an internal communication, Apple believes that “Generative AIs, while powerful, can potentially collect and share confidential data, leading to a breach of our security protocols.”
Elon Musk’s Twitter is accusing Microsoft of violating its data use policy, saying the tech giant has not adhered to its agreement for data use “for an extended period of time.”
OpenAI CEO Sam Altman will attend the secretive Bilderberg Meeting, an annual gathering of over 100 political and corporate leaders from Europe and North America, which has announced AI as a key item on its agenda this year.
OpenAI CEO Sam Altman recently testified to Congress about the potential risks and implications of AI technologies like ChatGPT, which have gained significant popularity in recent months.
A growing, hidden workforce of AI trainers is playing a crucial role in developing artificial intelligence systems like ChatGPT, despite the lack of benefits and recognition. In fact, the vaunted power of AI systems relies on this hidden army of contractors making $15 an hour.
The White House announced a plan to crack down on artificial intelligence (AI) on Thursday — amid growing concerns over the advanced technology possibly replacing humanity someday — naming Vice President Kamala Harris as “AI Czar” in charge of the new initiative.
A former OpenAI researcher, Paul Christiano, has expressed serious concerns about the potential risk that AI poses to humanity, estimating a 10-20 percent chance of an AI takeover resulting in a high number of human fatalities. According to Christiano, if AI reaches “human level” thinking, the human race approaches “a 50/50 chance of doom.”
SpaceX, Tesla, and Twitter CEO Elon Musk met with Senate Majority Leader Chuck Schumer (D-NY) on Wednesday to discuss AI regulation, a topic on which the billionaire has been vocal.
Microsoft will report its earnings today, amid a dramatic shift in perceptions of the company due to its multi-billion dollar investment in OpenAI, developers of the market-leading ChatGPT AI chatbot. However, Microsoft also has sluggish growth in its cloud business and steep declines in desktop PC purchases to contend with.
Google’s unreleased Bard AI chatbot received highly negative feedback from its own employees during testing, raising concerns about the company’s approach to AI ethics as it competes with OpenAI’s popular ChatGPT.
Google is reportedly scrambling to release its AI-powered search engine project as soon as possible in an attempt to catch up with Microsoft’s AI-powered Bing search engine built in partnership with ChatGPT powerhouse OpenAI.
A third-party implementation of OpenAI’s GPT-3.5 large language model is instructed to “destroy humanity” and “cause chaos and destruction,” among other nefarious goals.
The mayor of Hepburn Shire Council in Australia, Brian Hood, has threatened to sue OpenAI over ChatGPT. The AI accused Hood of being guilty of bribery and corruption in relation to a case where he was actually a whistleblower.
Google CEO Sundar Pichai recently announced the company’s plans to integrate conversational AI into its search engine amid competition from chatbots such as OpenAI’s ChatGPT and Microsoft’s Bing.
The Center for AI and Digital Policy (CAIDP), an offshoot of the Democrat-aligned Michael Dukakis Institute for Leadership and Innovation, is pressing the Federal Trade Commission (FTC) to put the brakes on OpenAI.
Midjourney, which along with Stability AI’s Stable Diffusion and OpenAI’s DALL-E is one of the leading AI image-generating services, has shut down its free edition as it attempts to clamp down on the spread of deepfake images.
The AI gold rush in the tech industry continues, representing one of the few growth areas in an industry that is facing cutbacks elsewhere.
A loud voice of doom in the debate over AI has emerged: Eliezer Yudkowsky of the Machine Intelligence Research Institute, who is calling for a total shutdown of the development of AI models more powerful than GPT-4, owing to the possibility that such a system could kill “every single member of the human species and all biological life on Earth.”
1,000 AI experts, including Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, have called for a temporary halt on the advancement of AI technology until safeguards can be put in place.
OpenAI CEO Sam Altman said ChatGPT “will make a lot of jobs just go away” in an interview with Lex Fridman.
Researchers at market-leading AI firm OpenAI and at the University of Pennsylvania are predicting that up to 80 percent of jobs could be impacted by AI technologies, which are rapidly increasing in sophistication.