As you’ve undoubtedly noticed, AI-related news is everywhere, and its impact continues to grow. Just last week, OpenAI released an iOS version of ChatGPT (an Android version is coming soon) that runs directly on your iPhone and lets you speak your requests to its interactive chatbot interface.
Now, Microsoft has announced that it’s bringing a range of new generative AI-powered features to Windows 11 starting in June. The main component is called Windows Copilot, a set of text-driven assistive features designed to make using your PC easier and more intuitive.
The company also announced the ability to integrate Bing Chat plug-ins into Windows, meaning that many of the impressive capabilities that Microsoft brought to its Bing search engine will be available directly in Windows.
When does Windows Copilot launch?
Windows Copilot will be available in preview to beta testers in June, with a general release later this year.
How does Windows Copilot work?
Clicking on a new icon located in the Windows taskbar will open a sidebar window where you can type in requests. These can come in the form of classic internet search questions like, “Who won the Giants game yesterday?” or “What are the ingredients in tiramisu?”
In addition, you can ask Windows Copilot to change settings within Windows, such as turning on dark mode or starting a focus session. You can also perform actions on your PC, such as dragging files from Windows Explorer into the Copilot window and having Copilot immediately summarize them.
It’s these latter capabilities that are particularly interesting, especially when the intelligence lurking behind Copilot starts to kick in. Imagine a future, for example, where you can request that your computer find information on a particular topic, have it neatly summarize what it discovers into a simple paragraph, and then paste that summary into a new (or existing) document.
Or, how about asking your Windows PC to find a time to schedule a meeting with colleagues or a dinner with friends and automatically send out the invitations?
While these are just a few simple concepts, they hint at the kind of transformational capabilities and new ways of interacting with your computer that have so many people (and the entire tech industry) excited about the possibilities of generative AI.
Can AI be used offline?
Windows Copilot features and ChatGPT apps for mobile phones show how generative AI applications are quickly moving from the cloud directly onto our devices. When you’re using these applications, most of the work is still being done in the cloud, meaning that you must have an internet connection for them to work.
Now we’re starting to hear discussions about moving some of these features onto our devices themselves, so that they can run locally using the computing power built into them.
This probably won’t matter to the vast majority of people. After all, you just want to get something done, and you don’t really care where it happens or how it works.
It turns out, though, that there are some important implications surrounding where the “work” is done that make it worth understanding. The way the computing efforts are distributed across different locations has a direct impact on things like the pricing, availability, security, and privacy of these applications and services.
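To make the trade-off concrete, here is a minimal sketch of how an application might decide where the "work" happens. This is purely illustrative: the function names, the routing logic, and the canned responses are all assumptions for the sake of the example, not a real Windows Copilot or ChatGPT API.

```python
def run_in_cloud(prompt: str) -> str:
    """Stand-in for a request to a cloud-hosted model.

    Hypothetical: a real call would go over the network, which is
    why these features normally require an internet connection.
    """
    return f"[cloud answer to: {prompt}]"


def run_on_device(prompt: str) -> str:
    """Stand-in for a smaller model running locally on the device.

    Hypothetical: local inference keeps the prompt's data on the
    machine and costs the provider nothing in cloud computing.
    """
    return f"[local answer to: {prompt}]"


def answer(prompt: str, online: bool, prefer_local: bool = True) -> str:
    """Route a request to local computing when possible.

    Falling back to the device when offline (or by preference)
    reduces cloud costs and improves privacy; the cloud path
    only works when a connection is available.
    """
    if prefer_local or not online:
        return run_on_device(prompt)
    return run_in_cloud(prompt)
```

Even in this toy form, the routing decision is where pricing, availability and privacy get determined: every request sent down the local path is one the provider doesn’t pay a cloud server to answer.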
As cool and exciting as these generative AI applications can be, they are quickly becoming notorious power hogs, since it takes a great many very powerful servers to run generative AI tasks.
The more people who want to use these features and the more services that are available, the higher the demand for computers hosted at cloud computing providers to run them and energy to power them.
None of those things come for free, so at some point, companies are likely to pass some of the costs along to the consumers and businesses who use these services. By shifting some of the computing work onto our devices, however, they can reduce these cloud-based computing demands and, therefore, their costs. In the end, that means that (hopefully) they won’t pass as much, or even any, of those costs on to users of generative AI applications and services.
USA TODAY columnist Bob O’Donnell is the president and chief analyst of TECHnalysis Research, a market research and consulting firm. You can follow him on Twitter @bobodtech.