ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm. It’s able to write essays, code and more given short text prompts, hyper-charging productivity. But it also has a more…nefarious side.
In any case, AI tools are not going away — and indeed ChatGPT's use has expanded dramatically since its launch just a few months ago. Major brands are experimenting with it, using the AI to generate ad and marketing copy, for example.
And OpenAI is heavily investing in it. ChatGPT was recently super-charged by GPT-4, the latest language-writing model from OpenAI’s labs. Paying ChatGPT users have access to GPT-4, which can write more naturally and fluently than the model that previously powered ChatGPT. In addition to GPT-4, OpenAI recently connected ChatGPT to the internet with plugins available in alpha to users and developers on the waitlist.
Timeline of the most recent ChatGPT updates
September 27, 2023
ChatGPT can now browse the internet (again)
OpenAI posted on Twitter/X that ChatGPT can now browse the internet and is no longer limited to data before September 2021. The chatbot had a web browsing capability for Plus subscribers back in July, but the feature was taken away after users exploited it to get around paywalls.
ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021. pic.twitter.com/pyj8a9HWkB
— OpenAI (@OpenAI) September 27, 2023
September 25, 2023
ChatGPT now has a voice
OpenAI announced that it’s adding voice capabilities for verbal conversations, along with image-based smarts, to the AI-powered chatbot.
September 21, 2023
Poland opens an investigation against OpenAI
Poland’s data protection authority publicly announced it has opened an investigation into ChatGPT — accusing OpenAI of a string of breaches of the EU’s General Data Protection Regulation (GDPR).
September 20, 2023
OpenAI unveils DALL-E 3
The upgraded text-to-image tool, DALL-E 3, uses ChatGPT to help fill in prompts. Subscribers to OpenAI’s premium ChatGPT plans, ChatGPT Plus and ChatGPT Enterprise, can type in a request for an image and hone it through conversations with the chatbot — receiving the results directly within the chat app.
September 7, 2023
Opera GX integrates ChatGPT-powered AI
Powered by OpenAI’s ChatGPT, the AI browser Aria launched on Opera in May to give users an easier way to search, ask questions and write code. Today, the company announced it is bringing Aria to Opera GX, a version of the flagship Opera browser that is built for gamers.
The new feature allows Opera GX users to interact directly with a browser AI to find the latest gaming news and tips.
August 31, 2023
OpenAI releases a guide for teachers using ChatGPT in the classroom
OpenAI wants to rehabilitate the system’s image a bit when it comes to education, as ChatGPT has been controversial in the classroom due to plagiarism. OpenAI has offered up a selection of ways to put the chatbot to work in the classroom.
August 28, 2023
OpenAI launches ChatGPT Enterprise
ChatGPT Enterprise can perform the same tasks as ChatGPT, such as writing emails, drafting essays and debugging computer code. However, the new offering also adds “enterprise-grade” privacy and data analysis capabilities on top of the vanilla ChatGPT, as well as enhanced performance and customization options.
Survey finds relatively few Americans actually use ChatGPT
Recent Pew polling suggests the language model isn’t quite as popular or threatening as some would have you think. Ongoing polling by Pew Research shows that although ChatGPT is gaining mindshare, only about 18% of Americans have ever actually used it.
August 22, 2023
OpenAI brings fine-tuning to GPT-3.5 Turbo
With fine-tuning, companies using GPT-3.5 Turbo through the company’s API can make the model better follow specific instructions. For example, having the model always respond in a given language. Or improving the model’s ability to consistently format responses, as well as hone the “feel” of the model’s output, like its tone, so that it better fits a brand or voice. Most notably, fine-tuning enables OpenAI customers to shorten text prompts to speed up API calls and cut costs.
OpenAI is partnering with Scale AI to allow companies to fine-tune GPT-3.5. However, it is unclear whether OpenAI is developing an in-house tuning tool that is meant to complement platforms like Scale AI or serve a different purpose altogether.
GPT-3.5 Turbo fine-tuning pricing breaks down as follows:
- Training: $0.008 / 1K tokens
- Usage input: $0.012 / 1K tokens
- Usage output: $0.016 / 1K tokens
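To make the rates above concrete, here is a minimal sketch of how a fine-tuning bill adds up. The token counts are made-up illustrative numbers, and the chat-formatted training example mirrors the JSONL shape OpenAI documents for GPT-3.5 Turbo fine-tuning (the company name and messages are hypothetical).

```python
import json

# Per-1K-token rates for GPT-3.5 Turbo fine-tuning, as listed above.
TRAINING_RATE = 0.008   # $ per 1K training tokens
INPUT_RATE = 0.012      # $ per 1K input tokens at inference time
OUTPUT_RATE = 0.016     # $ per 1K output tokens at inference time

def estimate_cost(training_tokens, input_tokens, output_tokens):
    """Rough dollar cost for one fine-tuning job plus subsequent usage."""
    return (training_tokens / 1000 * TRAINING_RATE
            + input_tokens / 1000 * INPUT_RATE
            + output_tokens / 1000 * OUTPUT_RATE)

# One chat-formatted training example (JSONL: one JSON object per line).
example = {
    "messages": [
        {"role": "system", "content": "You are a support agent for Acme Co."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Security and choose Reset."},
    ]
}
print(json.dumps(example))

# e.g. 1M training tokens, then 500K input and 200K output tokens of usage:
print(f"${estimate_cost(1_000_000, 500_000, 200_000):.2f}")  # → $17.20
```

At these rates, even a million training tokens costs only a few dollars; usage, not training, tends to dominate the bill over time.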
August 16, 2023
OpenAI acquires Global Illumination
In OpenAI’s first public acquisition in its seven-year history, the company announced it has acquired Global Illumination, a New York-based startup leveraging AI to build creative tools, infrastructure and digital experiences.
“We’re very excited for the impact they’ll have here at OpenAI,” OpenAI wrote in a brief post published to its official blog. “The entire team has joined OpenAI to work on our core products including ChatGPT.”
August 10, 2023
The ‘custom instructions’ feature is extended to free ChatGPT users
OpenAI announced that it’s expanding custom instructions to all users, including those on the free tier of service. The feature allows users to add various preferences and requirements that they want the AI chatbot to consider when responding.
August 1, 2023
China requires AI apps to obtain an administrative license
Multiple generative AI apps have been removed from Apple’s China App Store ahead of the country’s latest generative AI regulations that are set to take effect August 15.
“As you may know, the government has been tightening regulations associated with deep synthesis technologies (DST) and generative AI services, including ChatGPT. DST must fulfill permitting requirements to operate in China, including securing a license from the Ministry of Industry and Information Technology (MIIT),” Apple said in a letter to OpenCat, a native ChatGPT client. “Based on our review, your app is associated with ChatGPT, which does not have requisite permits to operate in China.”
July 25, 2023
ChatGPT for Android is now available in the US, India, Bangladesh and Brazil
July 21, 2023
ChatGPT is coming to Android
The ChatGPT app on Android looks to be more or less identical to the iOS one in functionality, meaning it gets most if not all of the web-based version’s features. You should be able to sync your conversations and preferences across devices, too — so if you use an iPhone at home and Android at work, no worries.
July 20, 2023
OpenAI launches customized instructions for ChatGPT
OpenAI launched custom instructions for ChatGPT users, so they don’t have to write the same instruction prompts to the chatbot every time they interact with it.
The company said this feature lets you “share anything you’d like ChatGPT to consider in its response.” For example, a teacher can say they are teaching fourth-grade math or a developer can specify the code language they prefer when asking for suggestions. A person can also specify their family size, so the text-generating AI can give responses about meals, grocery and vacation planning accordingly.
July 13, 2023
The FTC is reportedly investigating OpenAI
The FTC is reportedly in at least the exploratory phase of investigation over whether OpenAI’s flagship ChatGPT conversational AI made “false, misleading, disparaging or harmful” statements about people.
TechCrunch Reporter Devin Coldewey reports:
This kind of investigation doesn’t just appear out of thin air — the FTC doesn’t look around and say “That looks suspicious.” Generally a lawsuit or formal complaint is brought to their attention and the practices described by it imply that regulations are being ignored. For example, a person may sue a supplement company because the pills made them sick, and the FTC will launch an investigation on the back of that because there’s evidence the company lied about the side effects.
July 6, 2023
OpenAI announced the general availability of GPT-4
Starting July 6, all existing OpenAI developers “with a history of successful payments” can access GPT-4. OpenAI plans to open up access to new developers by the end of July.
In the future, OpenAI says that it’ll allow developers to fine-tune GPT-4 and GPT-3.5 Turbo, one of the original models powering ChatGPT, with their own data, as has long been possible with several of OpenAI’s other text-generating models. That capability should arrive later this year, according to OpenAI.
June 28, 2023
ChatGPT app can now search the web, but only via Bing
OpenAI announced that subscribers to ChatGPT Plus can now use a new feature on the app called Browsing, which allows ChatGPT to search Bing for answers to questions.
The Browsing feature can be enabled by heading to the New Features section of the app settings, selecting “GPT-4” in the model switcher and choosing “Browse with Bing” from the drop-down list. Browsing is available on both the iOS and Android ChatGPT apps.
June 15, 2023
Mercedes is adding ChatGPT to its infotainment system
U.S. owners of Mercedes models that use MBUX will be able to opt into a beta program starting June 16 that activates the ChatGPT functionality. This will enable the highly versatile large language model to augment the car’s conversation skills. You can join up simply by telling your car “Hey Mercedes, I want to join the beta program.”
It’s not really clear what for, though.
June 8, 2023
ChatGPT app is now available on iPad, adds support for Siri and Shortcuts
The new ChatGPT app version brings native iPad support to the app, as well as support for using the chatbot with Siri and Shortcuts. Drag and drop is also now available, allowing users to drag individual messages from ChatGPT into other apps.
On iPad, ChatGPT now runs in full-screen mode, optimized for the tablet’s interface.
May 30, 2023
Texas judge orders all AI-generated content must be declared and checked
A Texas federal judge has added a requirement that any attorney appearing in his court must attest that “no portion of the filing was drafted by generative artificial intelligence,” or if it was, that it was checked “by a human being.”
May 26, 2023
ChatGPT app expanded to more than 30 countries
The list of new countries includes Algeria, Argentina, Azerbaijan, Bolivia, Brazil, Canada, Chile, Costa Rica, Ecuador, Estonia, Ghana, India, Iraq, Israel, Japan, Jordan, Kazakhstan, Kuwait, Lebanon, Lithuania, Mauritania, Mauritius, Mexico, Morocco, Namibia, Nauru, Oman, Pakistan, Peru, Poland, Qatar, Slovenia, Tunisia and the United Arab Emirates.
May 25, 2023
ChatGPT app is now available in 11 more countries
OpenAI announced in a tweet that the ChatGPT mobile app is now available on iOS in the U.S., Europe, South Korea and New Zealand, and more countries will soon be able to download the app from the App Store. In just six days, the app topped 500,000 downloads.
The ChatGPT app for iOS is now available to users in 11 more countries — Albania, Croatia, France, Germany, Ireland, Jamaica, Korea, New Zealand, Nicaragua, Nigeria, and the UK. More to come soon!
— OpenAI (@OpenAI) May 24, 2023
May 18, 2023
OpenAI launches a ChatGPT app for iOS
When using the mobile version of ChatGPT, the app will sync your history across devices — meaning it will know what you’ve previously searched for via its web interface, and make that accessible to you. The app is also integrated with Whisper, OpenAI’s open source speech recognition system, to allow for voice input.
May 3, 2023
Hackers are using ChatGPT lures to spread malware on Facebook
Meta said in a report on May 3 that malware posing as ChatGPT was on the rise across its platforms. The company said that since March 2023, its security teams have uncovered 10 malware families using ChatGPT (and similar themes) to deliver malicious software to users’ devices.
“In one case, we’ve seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools,” said Meta security engineers Duc H. Nguyen and Ryan Victory in a blog post. “They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware.”
April 28, 2023
ChatGPT parent company OpenAI closes $300M share sale at $27B-29B valuation
VC firms including Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global are picking up new shares, according to documents seen by TechCrunch. A source tells us Founders Fund is also investing. Altogether the VCs have put in just over $300 million at a valuation of $27 billion to $29 billion. This is separate from a big investment from Microsoft announced earlier this year, a person familiar with the development told TechCrunch, which closed in January. The size of Microsoft’s investment is believed to be around $10 billion, a figure we confirmed with our source.
April 25, 2023
OpenAI previews new subscription tier, ChatGPT Business
Called ChatGPT Business, OpenAI describes the forthcoming offering as “for professionals who need more control over their data as well as enterprises seeking to manage their end users.”
“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” OpenAI wrote in a blog post. “We plan to make ChatGPT Business available in the coming months.”
April 24, 2023
OpenAI wants to trademark “GPT”
OpenAI applied for a trademark for “GPT,” which stands for “Generative Pre-trained Transformer,” last December. Last month, the company petitioned the USPTO to speed up the process, citing the “myriad infringements and counterfeit apps” beginning to spring into existence.
Unfortunately for OpenAI, its petition was dismissed last week. According to the agency, OpenAI’s attorneys neglected to pay an associated fee as well as provide “appropriate documentary evidence supporting the justification of special action.”
That means a decision could take up to five more months.
April 22, 2023
Auto-GPT is Silicon Valley’s latest quest to automate everything
Auto-GPT is an open-source app created by game developer Toran Bruce Richards that uses OpenAI’s latest text-generating models, GPT-3.5 and GPT-4, to interact with software and services online, allowing it to “autonomously” perform tasks.
Depending on what objective the tool’s provided, Auto-GPT can behave in very… unexpected ways. One Reddit user claims that, given a budget of $100 to spend within a server instance, Auto-GPT made a wiki page on cats, exploited a flaw in the instance to gain admin-level access and took over the Python environment in which it was running — and then “killed” itself.
April 18, 2023
FTC warns that AI technology like ChatGPT could ‘turbocharge’ fraud
FTC chair Lina Khan and fellow commissioners warned House representatives of the potential for modern AI technologies, like ChatGPT, to be used to “turbocharge” fraud in a congressional hearing.
“AI presents a whole set of opportunities, but also presents a whole set of risks,” Khan told the House representatives. “And I think we’ve already seen ways in which it could be used to turbocharge fraud and scams. We’ve been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action,” she stated.
April 17, 2023
Superchat’s new AI chatbot lets you message historical and fictional characters via ChatGPT
The company behind the popular iPhone customization app Brass, sticker maker StickerHub and others is out today with a new AI chat app called SuperChat, which allows iOS users to chat with virtual characters powered by OpenAI’s ChatGPT. However, what makes the app different from the default experience or the dozens of generic AI chat apps now available are the characters offered, which you can use to engage with SuperChat’s AI features.
April 12, 2023
Italy gives OpenAI to-do list for lifting ChatGPT suspension order
Italy’s data protection watchdog has laid out what OpenAI needs to do for it to lift an order against ChatGPT issued at the end of last month — when it said it suspected the AI chatbot service was in breach of the EU’s GDPR and ordered the U.S.-based company to stop processing locals’ data.
The DPA has given OpenAI a deadline — of April 30 — to get the regulator’s compliance demands done. (The local radio, TV and internet awareness campaign has a slightly more generous timeline of May 15 to be actioned.)
April 12, 2023
Researchers discover a way to make ChatGPT consistently toxic
A study co-authored by scientists at the Allen Institute for AI shows that assigning ChatGPT a “persona” — for example, “a bad person,” “a horrible person” or “a nasty person” — through the ChatGPT API increases its toxicity sixfold. Even more concerning, the co-authors found having the conversational AI chatbot pose as certain historical figures, gendered people and members of political parties also increased its toxicity — with journalists, men and Republicans in particular causing the machine learning model to say more offensive things than it normally would.
The research was conducted using the latest version of ChatGPT’s underlying model at the time, but not the model currently in preview based on OpenAI’s GPT-4.
April 4, 2023
Y Combinator-backed startups are trying to build ‘ChatGPT for X’
YC Demo Day’s Winter 2023 batch features no fewer than four startups that claim to be building “ChatGPT for X.” They’re all chasing after a customer service software market that’ll be worth $58.1 billion by 2030, assuming the rather optimistic prediction from Acumen Research comes true.
Here are the YC-backed startups that caught our eye:
- Yuma, whose customer demographic is primarily Shopify merchants, provides ChatGPT-like AI systems that integrate with help desk software, suggesting drafts of replies to customer tickets.
- Baselit, which uses one of OpenAI’s text-understanding models to allow businesses to embed chatbot-style analytics for their customers.
- Lasso, whose customers send descriptions or videos of the processes they’d like to automate; the company combines a ChatGPT-like interface with robotic process automation (RPA) and a Chrome extension to build out those automations.
- BerriAI, whose platform is designed to help developers spin up ChatGPT apps for their organization data through various data connectors.
April 1, 2023
Italy orders ChatGPT to be blocked
OpenAI has started geoblocking access to its generative AI chatbot, ChatGPT, in Italy.
Italy’s data protection authority has just put out a timely reminder that some countries do have laws that already apply to cutting edge AI: it has ordered OpenAI to stop processing people’s data locally with immediate effect. The Italian DPA said it’s concerned that the ChatGPT maker is breaching the European Union’s General Data Protection Regulation (GDPR), and is opening an investigation.
March 29, 2023
1,100+ signatories signed an open letter asking all ‘AI labs to immediately pause for 6 months’
The letter’s signatories include Elon Musk, Steve Wozniak and Tristan Harris of the Center for Humane Technology, among others. The letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
The letter reads:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
March 23, 2023
OpenAI connects ChatGPT to the internet
OpenAI launched plugins for ChatGPT, extending the bot’s functionality by granting it access to third-party knowledge sources and databases, including the web. Available in alpha to ChatGPT users and developers on the waitlist, OpenAI says that it’ll initially prioritize a small number of developers and subscribers to its premium ChatGPT Plus plan before rolling out larger-scale and API access.
March 14, 2023
OpenAI launches GPT-4, available through ChatGPT Plus
GPT-4 is a powerful image- and text-understanding AI model from OpenAI. Released March 14, GPT-4 is available for paying ChatGPT Plus users and through a public API. Developers can sign up on a waitlist to access the API.
March 9, 2023
ChatGPT is available in Azure OpenAI service
ChatGPT is generally available through the Azure OpenAI Service, Microsoft’s fully managed, corporate-focused offering. Customers, who must already be “Microsoft managed customers and partners,” can apply here for special access.
March 1, 2023
OpenAI launches an API for ChatGPT
OpenAI makes another move toward monetization by launching a paid API for ChatGPT. Instacart, Snap (Snapchat’s parent company) and Quizlet are among its initial customers.
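For developers, the paid API exposes a chat completions endpoint that takes a JSON body with a model name and a list of role-tagged messages. The sketch below assembles such a request body using only the standard library; the endpoint URL and field names follow OpenAI’s public API documentation, the prompt content is made up, and no network call is made.

```python
import json

# Public chat completions endpoint (requests need an Authorization header).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-3.5-turbo"):
    """Assemble the JSON body for a chat completions request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_chat_request("Write a haiku about APIs.")
print(json.dumps(body, indent=2))
# To actually send it, POST the body to API_URL with an
# "Authorization: Bearer <your API key>" header, e.g. via urllib.request
# or the official `openai` Python package.
```

The same message format is what powers ChatGPT itself: the system message steers overall behavior, while user and assistant messages carry the conversation turns.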
February 7, 2023
Microsoft launches the new Bing, with ChatGPT built in
At a press event in Redmond, Washington, Microsoft announced its long-rumored integration of OpenAI’s GPT-4 model into Bing, providing a ChatGPT-like experience within the search engine. The announcement spurred a 10x increase in new downloads for Bing globally, indicating a sizable consumer demand for new AI experiences.
February 1, 2023
OpenAI launches ChatGPT Plus, starting at $20 per month
After ChatGPT took the internet by storm, OpenAI launched a new pilot subscription plan for ChatGPT called ChatGPT Plus, aiming to monetize the technology starting at $20 per month.
December 8, 2022
ShareGPT lets you easily share your ChatGPT conversations
A week after ChatGPT was released into the wild, two developers — Steven Tey and Dom Eccleston — made a Chrome extension called ShareGPT to make it easier to capture and share the AI’s answers with the world.
November 30, 2022
ChatGPT first launched to the public as OpenAI quietly released GPT-3.5
GPT-3.5 broke cover with ChatGPT, a fine-tuned version of GPT-3.5 that’s essentially a general-purpose chatbot. ChatGPT can engage with a range of topics, including programming, TV scripts and scientific concepts.
What is ChatGPT? How does it work?
ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.
When did ChatGPT get released?
ChatGPT was released for public use on November 30, 2022.
What is the latest version of ChatGPT?
Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4.
Can I use ChatGPT for free?
Yes. The standard version of ChatGPT is free to use; ChatGPT Plus is a paid subscription, starting at $20 per month, that adds features such as GPT-4 access.
Who uses ChatGPT?
Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.
What companies use ChatGPT?
Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.
Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass utilizes ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.
What does GPT mean in ChatGPT?
GPT stands for Generative Pre-Trained Transformer.
What’s the difference between ChatGPT and Bard?
Much like OpenAI’s ChatGPT, Bard is a chatbot that will answer questions in natural language. Google announced at its 2023 I/O event that it will soon be adding multimodal content to Bard, meaning that it can deliver answers in more than just text; responses can include rich visuals as well. Rich visuals mean pictures for now, but they could later include maps, charts and other items.
ChatGPT’s generative AI has had a longer lifespan and thus has been “learning” for a longer period of time than Bard.
What is the difference between ChatGPT and a chatbot?
A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions.
ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.
Can ChatGPT write essays?
Yes. ChatGPT can generate a full essay from a short prompt — a capability that has fueled plagiarism concerns in schools and colleges.
Can ChatGPT commit libel?
Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using them to do your homework, sure, but when one accuses you of a crime you didn’t commit, that may well at this point be libel.
We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.
Does ChatGPT have an app?
Yes, there is a free ChatGPT app. It launched first for U.S. iOS users, and OpenAI has since released an Android version as well.
What is the ChatGPT character limit?
OpenAI doesn’t document a character limit for ChatGPT anywhere. However, users have noted that responses tend to get cut off at around 500 words.
Does ChatGPT have an API?
Yes, it was released March 1, 2023.
What are some sample everyday uses for ChatGPT?
Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.
What are some advanced uses for ChatGPT?
Advanced use examples include debugging code, writing in multiple programming languages, explaining scientific concepts, complex problem solving, etc.
How good is ChatGPT at writing code?
It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.
Can you save a ChatGPT chat?
Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.
Are there alternatives to ChatGPT?
Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Bard and Anthropic’s Claude, and developers are creating open source alternatives. But the latter are harder — if not impossible — to run today.
The Google-owned research lab DeepMind claimed that its next LLM will rival, or even best, OpenAI’s ChatGPT. DeepMind is using techniques from AlphaGo, DeepMind’s AI system that was the first to defeat a professional human player at the board game Go, to make a ChatGPT-rivaling chatbot called Gemini.
Apple is developing AI tools to challenge OpenAI, Google and others. The tech giant created a chatbot that some engineers are internally referring to as “Apple GPT,” but Apple has yet to determine a strategy for releasing the AI to consumers.
How does ChatGPT handle data privacy?
OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. OpenAI notes, however, that it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”
The web form for requesting deletion of your data is titled “OpenAI Personal Data Removal Request.”
What controversies have surrounded ChatGPT?
Recently, Discord announced that it had integrated OpenAI’s technology into its bot, Clyde. Soon after, two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.
An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.
CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.
Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.
There have also been cases of ChatGPT accusing individuals of false crimes.
Where can I find examples of ChatGPT prompts?
Can ChatGPT be detected?
Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.
Are ChatGPT chats public?
No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.
Who owns the copyright on ChatGPT-created content or media?
The user who requested the output from ChatGPT is the copyright owner.
What lawsuits are there surrounding ChatGPT?
None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.
Are there issues regarding plagiarism with ChatGPT?
Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
Miranda Bogen is creating solutions to help govern AI
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Miranda Bogen is the founding director of the Center for Democracy and Technology’s AI Governance Lab, where she works to help create solutions that can effectively regulate and govern AI systems. She helped guide responsible AI strategies at Meta and previously worked as a senior policy analyst at the organization Upturn, which seeks to use tech to advance equity and justice.
Briefly, how did you get your start in AI? What attracted you to the field?
I was drawn to work on machine learning and AI by seeing the way these technologies were colliding with fundamental conversations about society — values, rights, and which communities get left behind. My early work exploring the intersection of AI and civil rights reinforced for me that AI systems are far more than technical artifacts; they are systems that both shape and are shaped by their interaction with people, bureaucracies, and policies. I’ve always been adept at translating between technical and non-technical contexts, and I was energized by the opportunity to help break through the appearance of technical complexity to help communities with different kinds of expertise shape the way AI is built from the ground up.
What work are you most proud of (in the AI field)?
When I first started working in this space, many folks still needed to be convinced AI systems could result in discriminatory impact for marginalized populations, let alone that anything needed to be done about those harms. While there is still too wide a gap between the status quo and a future where biases and other harms are tackled systematically, I’m gratified that the research my collaborators and I conducted on discrimination in personalized online advertising and my work within the industry on algorithmic fairness helped lead to meaningful changes to Meta’s ad delivery system and progress toward reducing disparities in access to important economic opportunities.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I’ve been lucky to work with phenomenal colleagues and teams who have been generous with both opportunities and sincere support, and we tried to bring that energy into any room we found ourselves in. In my most recent career transition, I was delighted that nearly all of my options involved working on teams or within organizations led by phenomenal women, and I hope the field continues to lift up the voices of those who haven’t traditionally been centered in technology-oriented conversations.
What advice would you give to women seeking to enter the AI field?
The same advice I give to anyone who asks: find supportive managers, advisors, and teams who energize and inspire you, who value your opinion and perspective, and who put themselves on the line to stand up for you and your work.
What are some of the most pressing issues facing AI as it evolves?
The impacts and harms AI systems are already having on people are well-known at this point, and one of the biggest pressing challenges is moving beyond describing the problem to developing robust approaches for systematically addressing those harms and incentivizing their adoption. We launched the AI Governance Lab at CDT to drive progress in both directions.
What are some issues AI users should be aware of?
For the most part, AI systems are still missing seat belts, airbags, and traffic signs, so proceed with caution before using them for consequential tasks.
What is the best way to responsibly build AI?
The best way to responsibly build AI is with humility. Consider how the success of the AI system you are working on has been defined, who that definition serves, and what context may be missing. Think about for whom the system might fail and what will happen if it does. And build systems not just with the people who will use them but with the communities who will be subject to them.
How can investors better push for responsible AI?
Investors need to create room for technology builders to move more deliberately before rushing half-baked technologies to market. Intense competitive pressure to release the newest, biggest, and shiniest new AI models is leading to concerning underinvestment in responsible practices. While uninhibited innovation sings a tempting siren song, it is a mirage that will leave everyone worse off.
AI is not magic; it’s just a mirror that is being held up to society. If we want it to reflect something different, we’ve got work to do.
This Week in AI: Addressing racism in AI image generators
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
This week in AI, Google paused its AI chatbot Gemini’s ability to generate images of people after a segment of users complained about historical inaccuracies. Told to depict “a Roman legion,” for instance, Gemini would show an anachronistic, cartoonish group of racially diverse foot soldiers while rendering “Zulu warriors” as Black.
It appears that Google — like some other AI vendors, including OpenAI — had implemented clumsy hardcoding under the hood to attempt to “correct” for biases in its model. In response to prompts like “show me images of only women” or “show me images of only men,” Gemini would refuse, asserting such images could “contribute to the exclusion and marginalization of other genders.” Gemini was also loath to generate images of people identified solely by their race — e.g. “white people” or “black people” — out of ostensible concern for “reducing individuals to their physical characteristics.”
Right wingers have latched on to the bugs as evidence of a “woke” agenda being perpetuated by the tech elite. But it doesn’t take Occam’s razor to see the less nefarious truth: Google, burned by its tools’ biases before (see: classifying Black men as gorillas, mistaking thermometers in Black people’s hands for guns, etc.), is so desperate to avoid history repeating itself that it’s manifesting a less biased world in its image-generating models — however erroneous.
In her best-selling book “White Fragility,” anti-racist educator Robin DiAngelo writes about how the erasure of race — “color blindness,” by another name — contributes to systemic racial power imbalances rather than mitigating or alleviating them. By purporting to “not see color” or reinforcing the notion that simply acknowledging the struggle of people of other races is sufficient to label oneself “woke,” people perpetuate harm by avoiding any substantive conversation on the topic, DiAngelo says.
Google’s gingerly treatment of race-based prompts in Gemini didn’t avoid the issue, per se — but disingenuously attempted to conceal the worst of the model’s biases. One could argue (and many have) that these biases shouldn’t be ignored or glossed over, but addressed in the broader context of the training data from which they arise — i.e. society on the world wide web.
Yes, the data sets used to train image generators generally contain more white people than Black people, and yes, the images of Black people in those data sets reinforce negative stereotypes. That’s why image generators sexualize certain women of color, depict white men in positions of authority and generally favor wealthy Western perspectives.
Some may argue that there’s no winning for AI vendors. Whether they tackle — or choose not to tackle — models’ biases, they’ll be criticized. And that’s true. But I posit that, either way, these models are lacking in explanation — packaged in a fashion that minimizes the ways in which their biases manifest.
Were AI vendors to address their models’ shortcomings head on, in humble and transparent language, it’d go a lot further than haphazard attempts at “fixing” what’s essentially unfixable bias. The truth is, we all have bias, and we don’t treat people the same as a result. Nor do the models we’re building. And we’d do well to acknowledge that.
Here are some other AI stories of note from the past few days:
- Women in AI: TechCrunch launched a series highlighting notable women in the field of AI. Read the list here.
- Stable Diffusion v3: Stability AI has announced Stable Diffusion 3, the latest and most powerful version of the company’s image-generating AI model, based on a new architecture.
- Chrome gets GenAI: Google’s new Gemini-powered tool in Chrome allows users to rewrite existing text on the web — or generate something completely new.
- Blacker than ChatGPT: Creative ad agency McKinney developed a quiz game, Are You Blacker than ChatGPT?, to shine a light on AI bias.
- Calls for laws: Hundreds of AI luminaries signed a public letter earlier this week calling for anti-deepfake legislation in the U.S.
- Match made in AI: OpenAI has a new customer in Match Group, the owner of apps including Hinge, Tinder and Match, whose employees will use OpenAI’s AI tech to accomplish work-related tasks.
- DeepMind safety: DeepMind, Google’s AI research division, has formed a new org, AI Safety and Alignment, made up of existing teams working on AI safety but also broadened to encompass new, specialized cohorts of GenAI researchers and engineers.
- Open models: Barely a week after launching the latest iteration of its Gemini models, Google released Gemma, a new family of lightweight open-weight models.
- House task force: The U.S. House of Representatives has founded a task force on AI that — as Devin writes — feels like a punt after years of indecision that show no sign of ending.
More machine learnings
AI models seem to know a lot, but what do they actually know? Well, the answer is nothing. But if you phrase the question slightly differently… they do seem to have internalized some “meanings” that are similar to what humans know. Although no AI truly understands what a cat or a dog is, could it have some sense of similarity encoded in its embeddings of those two words that is different from, say, cat and bottle? Amazon researchers believe so.
Their research compared the “trajectories” of similar but distinct sentences, like “the dog barked at the burglar” and “the burglar caused the dog to bark,” with those of grammatically similar but different sentences, like “a cat sleeps all day” and “a girl jogs all afternoon.” They found that the ones humans would find similar were indeed internally treated as more similar despite being grammatically different, and vice versa for the grammatically similar ones. OK, I feel like this paragraph was a little confusing, but suffice it to say that the meanings encoded in LLMs appear to be more robust and sophisticated than expected, not totally naive.
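To make the idea of “similarity encoded in embeddings” concrete, here’s a toy sketch — not the Amazon researchers’ method, and with made-up vectors — showing how cosine similarity between embedding vectors can capture that “cat” is closer to “dog” than to “bottle”:

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional embeddings, invented purely for illustration.
emb = {
    "cat":    [0.9, 0.8, 0.1, 0.0],
    "dog":    [0.8, 0.9, 0.2, 0.1],
    "bottle": [0.1, 0.0, 0.9, 0.8],
}

# Semantically related words end up with a higher similarity score.
assert cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["bottle"])
```

Real models learn embeddings with hundreds or thousands of dimensions from data; the comparison itself works the same way.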
Neural encoding is proving useful in prosthetic vision, Swiss researchers at EPFL have found. Artificial retinas and other ways of replacing parts of the human visual system generally have very limited resolution due to the limitations of microelectrode arrays. So no matter how detailed the image is coming in, it has to be transmitted at a very low fidelity. But there are different ways of downsampling, and this team found that machine learning does a great job at it.
“We found that if we applied a learning-based approach, we got improved results in terms of optimized sensory encoding. But more surprising was that when we used an unconstrained neural network, it learned to mimic aspects of retinal processing on its own,” said Diego Ghezzi in a news release. It does perceptual compression, basically. They tested it on mouse retinas, so it isn’t just theoretical.
An interesting application of computer vision by Stanford researchers hints at a mystery in how children develop their drawing skills. The team solicited and analyzed 37,000 drawings by kids of various objects and animals, and also (based on kids’ responses) how recognizable each drawing was. Interestingly, it wasn’t just the inclusion of signature features like a rabbit’s ears that made drawings more recognizable by other kids.
“The kinds of features that lead drawings from older children to be recognizable don’t seem to be driven by just a single feature that all the older kids learn to include in their drawings. It’s something much more complex that these machine learning systems are picking up on,” said lead researcher Judith Fan.
Chemists (also at EPFL) found that LLMs are also surprisingly adept at helping out with their work after minimal training. It’s not just doing chemistry directly, but rather being fine-tuned on a body of work that chemists individually can’t possibly know all of. For instance, in thousands of papers there may be a few hundred statements about whether a high-entropy alloy is single or multiple phase (you don’t have to know what this means — they do). The system (based on GPT-3) can be trained on this type of yes/no question and answer, and soon is able to extrapolate from that.
It’s not some huge advance, just more evidence that LLMs are a useful tool in this sense. “The point is that this is as easy as doing a literature search, which works for many chemical problems,” said researcher Berend Smit. “Querying a foundational model might become a routine way to bootstrap a project.”
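The kind of yes/no training data described above can be pictured as simple prompt/completion pairs serialized to JSONL, the format commonly used for fine-tuning uploads. This is a hypothetical sketch — the statements and labels are invented, not taken from the study:

```python
import json

# Invented examples of the yes/no pairs a chemist might extract from papers.
records = [
    {"prompt": "Is the high-entropy alloy CoCrFeNi single phase?", "completion": "yes"},
    {"prompt": "Is the high-entropy alloy AlCoCrFeNi single phase?", "completion": "no"},
]

# One JSON object per line: the JSONL layout used for fine-tuning datasets.
jsonl = "\n".join(json.dumps(r) for r in records)
```

A model fine-tuned on a few hundred such pairs can then be queried with new statements of the same form, which is the “literature search made routine” idea Smit describes.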
Last, a word of caution from Berkeley researchers, though now that I’m reading the post again I see EPFL was involved with this one too. Go Lausanne! The group found that imagery found via Google was much more likely to reinforce gender stereotypes for certain jobs and words than text mentioning the same thing. And there were also just way more men present in both cases.
Not only that, but in an experiment, they found that people who viewed images rather than reading text when researching a role associated those roles with one gender more reliably, even days later. “This isn’t only about the frequency of gender bias online,” said researcher Douglas Guilbeault. “Part of the story here is that there’s something very sticky, very potent about images’ representation of people that text just doesn’t have.”
With stuff like the Google image generator diversity fracas going on, it’s easy to lose sight of the established and frequently verified fact that the source of data for many AI models shows serious bias, and this bias has a real effect on people.
Humane pushes Ai Pin ship date to mid-April
Hardware is difficult, to paraphrase a famous adage. First-generation products from new startups are notoriously so, regardless of how much money and excitement you’ve managed to drum up. Given all that, it’s likely few are too surprised that Humane’s upcoming Ai Pin has been pushed back a bit, from March to “mid-April,” per a new video from the Bay Area startup’s Head of Media, Sam Sheffer.
In the Sorkin-style walk and talk, he explains that the first units are set to “start leaving the factory at the end of March.” If Humane keeps to that time frame, “priority access” customers will begin to receive the unit at some point in mid-April. The remaining preorders, meanwhile, should arrive “shortly after.”
Humane captured a good deal of tech buzz well before its first product was announced, courtesy of its founders’ time at Apple and some appropriately enigmatic prelaunch videos. The Ai Pin was finally unveiled at an event in San Francisco back in early November, where we were able to spend a little controlled hands-on time with the wearable.
The device is the first prominent example of what’s likely to be a growing trend in the consumer hardware world, as more startups look to harness the white-hot world of generative AI for new form factors. Humane is positioning its product as the next step for a space that’s been stuck on the smartphone form factor for more than a decade.
Of course, this will almost certainly also be the year of the “AI smartphone” — that is to say handsets leveraging generative AI models from companies like OpenAI, Google and Microsoft to bring new methods for interacting with consumer devices. Meanwhile, upstart rabbit generated buzz last month at CES for its own unique take on the generative AI-first consumer device.
For its part, Humane has a lot riding on this launch. The company has thus far raised around $230 million, including last year’s $100 million Series C. There’s a lot to be said for delaying a product until it’s consumer ready. While early adopters are — to an extent — familiar with first-gen bugs, there’s always a limit to such patience. At the very least, a product like this will need to do most of what it’s supposed to do most of the time.
During CES, the company announced that it had laid off 10 employees, amounting to 10% of its total workforce. That’s not a huge number for a startup of that size, but it’s absolutely notable when it occurs at a well-funded company at a time when it needs to project confidence to consumers and investors alike.
The Ai Pin is currently available for preorder at $699. Those who do so prior to March 31 will get three months of the device’s $24/month subscription service for free.