These new ChatGPT capabilities are SCARY-GOOD 😲 Here's what you need to know
Build LLM Apps that can see, hear & speak
Over the last 48 hours, OpenAI has released some crazy-cool (and a little bit scary) new ChatGPT features that you really need to know about.
I'll break 'em down for you real quick and then show you where you can go to learn to start building apps that leverage these mind-blowing new capabilities.
Feature 1: ChatGPT gets unfettered internet access
If you've been using ChatGPT on any sort of regular basis, then you've certainly run up against its warning that the data it was trained on was only current up to January 2022.
For those of us who use ChatGPT as a research tool, that information limitation was quite a hindrance, UNTIL NOW.
Over on Twitter yesterday, OpenAI announced that “ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources.”
That means that we can now get direct access to up-to-date research findings without needing to comb the entire internet manually, article-by-article! Major win!
Another thing I very much appreciate about this update is OpenAI is FINALLY attributing IP back to creators… it's a step in the right direction on several fronts.
Source: OpenAI on X
Feature 2: ChatGPT gets ears, eyes & a mouth
On top of real-time internet access, OpenAI has also imbued ChatGPT with image-processing capabilities and complete audio interactivity.
Yep, you can now do things like:
Talk to the application and have it speak back with answers inferred from live, up-to-date data straight from the internet.
Upload images of broken objects and have the application tell you what is wrong with the object and steps you can take to fix it.
Upload pen-and-paper webpage mockups and have the application output code you can use to build that exact layout on the web.
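If you're itching to build with these multimodal features yourself, image inputs to the OpenAI chat API travel as mixed text-and-image message content. Here's a minimal sketch of that message shape; the helper name and example URL are my own illustrative assumptions, not anything from OpenAI's announcement:

```python
# Sketch: bundling a text prompt and an image into one chat message,
# in the mixed-content format the OpenAI chat API accepts.
# The helper name and example URL are illustrative assumptions.

def build_vision_message(prompt: str, image_url: str) -> dict:
    """Bundle a text prompt and an image URL into one user message."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

message = build_vision_message(
    "What's broken in this photo, and how do I fix it?",
    "https://example.com/broken-faucet.jpg",
)
```

You'd pass a message like this in the `messages` list of a chat completion request to a vision-capable model.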
So, yeah - this is all sort of major…
⚠️ Warning: Don't worry if you don't see these updates quite yet in your version of ChatGPT. They're being rolled out slowly, and I myself only have access to the image upload feature so far.
🚀 [LIVE TRAINING] How to Build LLM Apps That Can See, Hear, Speak
Now that you know what ChatGPT's new features are and how they're helpful, I want to invite you to a FREE live training session where you can start learning to build with these features right away.
Join us tomorrow live at 10 am PDT to learn more about building multi-modal LLM Apps using OpenAI.
In this 60-minute all-star training demo, you'll learn to build generative AI apps that make sense of text inputs, while also assimilating and generating audio all on their own. 😲
You'll leverage voice recognition alongside OpenAI embeddings, through an intuitive user interface you can quickly hook up to your own personal datastore.
What Youāll Learn In This Live Training Demo:
👉 How to fetch juicy company news snippets with the trusty 'requests' library.
👉 Explore the art of intertwining questions and answers to amplify user engagement.
👉 Witness the wonder of using voice commands to query data from your system!
👉 Get a first look at OpenAI's brand-new voice and image capabilities.
👉 And a sneak peek at how to use the new text-to-speech model to generate human-like audio.
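To give you a taste of that first agenda item, here's a minimal sketch of pulling news snippets with the 'requests' library. The URL and JSON layout are placeholder assumptions on my part; the actual data source is covered in the training.

```python
# Minimal sketch: fetching company news snippets with `requests`.
# The endpoint URL and JSON layout below are placeholder assumptions,
# not the actual data source used in the training.
import requests

def extract_snippets(payload: dict, limit: int = 5) -> list[str]:
    """Pull up to `limit` headline strings out of a parsed JSON payload."""
    return [item["title"] for item in payload.get("articles", [])][:limit]

def fetch_news_snippets(url: str, limit: int = 5) -> list[str]:
    """Fetch a news endpoint and return its first few headlines."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # surface HTTP errors instead of bad data
    return extract_snippets(response.json(), limit)

# Example call (hypothetical endpoint):
# fetch_news_snippets("https://example.com/api/news?company=openai")
```

Splitting the JSON parsing out of the HTTP call keeps the snippet logic easy to test without touching the network.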
You canāt afford to miss this!
If you canāt join live, sign up now and weāll send you the replay.
Yours Truly,
Lillian Pierson
PS. If you liked this newsletter, please consider referring a friend!
Disclaimer: This email may include sponsored content or affiliate links, and I may earn a small commission if you purchase something after clicking the link. Thank you for supporting small business ❤️.