These new ChatGPT capabilities are SCARY-GOOD 😲 Here’s what you need to know
Build LLM Apps that can see, hear & speak
Over the last 48 hours, OpenAI has released some crazy-cool (and a little bit scary) new ChatGPT features that you really need to know about.
I’ll break ‘em down for you real quick and then show you where you can go to learn to start building apps that leverage these mind-blowing new capabilities.
Feature 1: ChatGPT gets unfettered internet access
If you’ve been using ChatGPT on any sort of regular basis, then you’ve certainly run up against its warning that the data it was trained on was only current up to January 2022.
For those of us who use ChatGPT as a research tool, that information limitation was quite a hindrance… UNTIL NOW.
Over on Twitter yesterday, OpenAI announced that “ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources.”
That means that we can now get direct access to up-to-date research findings without needing to comb the entire internet manually, article-by-article! Major win!
Another thing I very much appreciate about this update: OpenAI is FINALLY attributing content back to its creators with direct source links… it’s a step in the right direction on several fronts.
Source: OpenAI on X
Feature 2: ChatGPT gets ears, eyes & a mouth
On top of real-time internet access, OpenAI has also given ChatGPT the ability to process images and hold full voice conversations.
Yep, you can now do things like:
Talk to the application and have it speak back with answers drawn from live, up-to-date information straight from the internet.
Upload images of broken objects and have the application tell you what’s wrong with them and what steps you can take to fix them.
Upload pen-and-paper webpage mockups and have the application output code you can use to build that exact layout on the web.
So, yeah - this is all sort of major…
⚠️ Warning: Don’t worry if you don’t see these updates quite yet in your version of ChatGPT. They’re being rolled out slowly, and I myself only have access to the image upload feature so far.
📆 [LIVE TRAINING] How to Build LLM Apps That Can See, Hear & Speak
Now that you know what ChatGPT’s new features are and how they’re helpful, I want to invite you to a FREE live training session where you can start learning to build with these features right away.
Join us live tomorrow at 10 am PDT to learn more about building multimodal LLM apps with OpenAI.
In this 60-minute all-star training demo, you’ll learn to build generative AI apps that make sense of text inputs while also understanding and generating audio all on their own. 😲
You’ll combine voice recognition capabilities with OpenAI embeddings behind an intuitive user interface that you can quickly set up over your own personal datastore.
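To give you a feel for it ahead of time, here’s a minimal sketch of what that voice-query pipeline can look like, assuming the openai Python SDK (v1+). The file name, model choices, and the `search_datastore` helper are my own illustrative placeholders, not necessarily the exact stack we’ll use in the session:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_datastore(query_embedding):
    # Hypothetical helper: replace with a lookup against your own vector
    # store (FAISS, Chroma, etc.) that returns the most relevant text.
    return "...your retrieved documents go here..."


# 1. Transcribe a recorded voice question with Whisper.
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )
question = transcript.text

# 2. Embed the question so it can be matched against your datastore.
query_embedding = client.embeddings.create(
    model="text-embedding-ada-002", input=question
).data[0].embedding

# 3. Retrieve relevant context, then answer the question grounded in it.
context = search_datastore(query_embedding)
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
).choices[0].message.content
print(answer)
```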
What You’ll Learn In This Live Training Demo:
🌟 How to fetch juicy company news snippets with the trusty ‘requests’ library (see the sketch after this list).
🌟 Explore the art of intertwining questions and answers to amplify user engagement.
🌟 Witness the wonder of using voice commands to query data from your system!
🌟 Get a first look at OpenAI's brand-new voice and image capabilities.
🌟 And a sneak-peek how-to on using the new text-to-speech model to generate human-like audio.
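Here’s a hedged sketch that stitches two of those pieces together: pulling a news snippet with requests and voicing it with OpenAI’s text-to-speech endpoint. The news URL and parameters below are NewsAPI-style placeholders, and the model and voice choices are just examples, not necessarily what we’ll build live:

```python
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fetch one company-news snippet. This endpoint is a NewsAPI-style
# placeholder -- swap in whichever news source you actually use.
resp = requests.get(
    "https://newsapi.org/v2/everything",
    params={"q": "OpenAI", "pageSize": 1, "apiKey": "YOUR_NEWS_API_KEY"},
    timeout=10,
)
resp.raise_for_status()
snippet = resp.json()["articles"][0]["description"]

# Read the snippet aloud with the text-to-speech model.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=snippet)
speech.stream_to_file("news_snippet.mp3")
```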
You can’t afford to miss this!
If you can’t join live, sign up now and we’ll send you the replay.
Yours Truly,
Lillian Pierson
PS. If you liked this newsletter, please consider referring a friend!
Disclaimer: This email may include sponsored content or affiliate links and I may possibly earn a small commission if you purchase something after clicking the link. Thank you for supporting small business ♥️.