We are only scratching the surface of how the technology will transform businesses.
Artificial intelligence (AI) was the buzzword on everyone’s lips this year—the hot investment, the future that has finally arrived, society’s downfall or its savior. In many ways, it remains a solution in search of a problem, with some of the applications most accessible to ordinary people being novelties like the many-fingered amalgamations of stock photos that can be generated with a few keywords or prompts. Many social media users had fun posting screenshots of AI-assisted search engine “hallucinations,” confidently informing people that strong passwords include the user’s name and birthday or that gasoline can be used in recipes (though not recommended for home cooking because it is “highly flammable”). Some of these hiccups were serious, but even the inconsequential errors added to public perception that generative AI is an immature technology, a marketing gimmick, and an investor cash-grab. But quietly, 2024 was the year AI became a real part of the workflow in many industries, including office technology.
Current and Future AI Applications
West McDonald, founder of GoWest.AI, a consulting firm specializing in helping office equipment dealers understand and adopt this new technology, saw growth take off this year. This was especially true in the MSP space, where AI solutions have been a natural fit alongside other managed offerings such as cloud services and cybersecurity.
“In the office equipment channel, there’s been a little more hesitancy,” McDonald said. “One of the things that I’ve seen people looking for is the application. Something that’s built specifically for the office equipment channel, and that certainly doesn’t exist yet.”
“What the LLMs can do today is not going to be what they can do tomorrow,” said Greg Walters, CEO of Greg Walters, Inc. and publisher of The Greg Report, a weekly roundup of news and analysis of the AI industry. “It’s tough for a dealership to say we want to throw artificial intelligence at our problems, but the solutions change every three or four months. It’s a moving target on both sides.”
This issue is echoed in discussions that AI industry experts have been having all year. The initial successes of AI were wide-spectrum offerings like OpenAI’s ChatGPT or Microsoft Copilot, comprehensive platforms meant to aid most users in doing most things. In a conversation on the Washington Post’s technology podcast The Futurist, AOL founder Steve Case predicted that the next wave of progress will be in verticalizing AI—smaller programs developed by smaller tech firms to sell to smaller market sectors, tailored to the fine details of that specific industry, including the often idiosyncratic ways people in a particular industry use language. A managed services dealer and a meteorologist will mean very different things when they say “cloud,” for instance, and verticalized AI will be built with that understanding baked in.
AI Drivers
One of the most essential steps to developing large language model (LLM) applications verticalized to the office equipment industry is buy-in from leading companies, and 2024 is the year when that began to pick up momentum. In October, Toshiba in Japan announced the launch of Toshiba Tec AI Innovation Hub, a task force dedicated to promoting the use of generative AI. Toshiba Tec will focus on increasing productivity through automation and optimization of AI tools and researching and disseminating information about developments in the field. Similarly, Ricoh Japan launched an AI evangelist training program to drive internal AI adoption and propose AI-driven business improvements for clients.
While LLMs will serve as the bedrock of AI applications for a long time to come, the way they’re used is already starting to evolve into something more specialized. “We’re seeing the shift from general model LLMs into these large action models or AI agents as they’re called,” said McDonald. “If you look at some of the big players—like Microsoft, Salesforce, OpenAI—they’re all starting to release these AI agents. It’s pretty exciting because the big change is that instead of just chatting with it or getting it to write a better email, these agents are self-learning.”
In this context, self-learning means the AI can train itself to a degree, observing patterns in data and finding ways to optimize workflows. AI agents will, within the parameters defined by the company using them, analyze information, make decisions based on that analysis, and take actions on their own.
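The agent behavior described above follows a simple loop: observe, decide within company-defined parameters, then act. A minimal sketch of that loop, with all names illustrative rather than drawn from any real agent product:

```python
# Hypothetical sketch of an AI-agent control loop: the agent observes
# incoming information, decides on an action, and only acts when that
# action falls within operator-defined parameters (allowed_actions).

def run_agent(observe, decide, act, allowed_actions, max_steps=10):
    """Observe -> decide -> act loop, constrained to allowed_actions."""
    history = []
    for _ in range(max_steps):
        observation = observe()                # analyze incoming information
        action = decide(observation, history)  # choose an action from analysis
        if action is None:
            break                              # nothing left to do; stop
        if action not in allowed_actions:
            continue                           # stay inside defined parameters
        history.append((observation, action))  # record for future decisions
        act(action)                            # take the action autonomously
    return history
```

The guardrail check before `act()` is the key design choice: autonomy is bounded by an explicit allow-list rather than by the model’s own judgment.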
“They’re still built on the framework of large language models; it’s just that they’re a lot smarter and self-learning. As clever as ChatGPT or Claude have been, they don’t think or make decisions,” said McDonald, though with a note of caution. “In the early days of LLMs you had to start small, and certainly with AI agents it’s going to be the same thing.”
“I can foresee dealerships and MSPs taking level one and level two phone calls with an AI bot,” predicted Walters. “It might take an OEM to roll that out first so they start buying into it, but if they don’t then other industries are going to pick it up.”
One of the keys to making productive use of AI is understanding what the technology can and can’t do well. Some of the applications that seem like obvious fits aren’t well-suited to the actual capabilities of LLMs. “In the early days of using ChatGPT everyone said ‘Oh, it’s great, you’ve got to start using it for writing emails and blogs,’” said McDonald. “My honest impression is that AI is a terrible writer.”
Current-gen AI writers certainly have their tics. AI loves to use the word “realm,” and to invite the reader to “imagine a world.” It uses hyphens and em-dashes prodigiously, like an impatient teen who doesn’t feel like figuring out what punctuation correctly conveys their meaning. It loves the word “conveys” too, but don’t worry, this publication still uses human writers. It nearly always affects a certain voice, half high-school C-student, half LinkedIn influencer. Public LLMs were trained on the bulk of internet-accessible writing, not the best of it.
“As soon as you get used to its character and tone, you can sniff it out a thousand miles away,” laughed McDonald.
Problem Solving
The takeaway here, arriving rather embarrassingly late in 2024 for many AI users, is to use a tool for what it does well. Don’t curse the wrench for being a lousy hammer. But to do that, companies will first have to take the time to figure out what problem they’re actually trying to solve. Technology, both physical and virtual, is meant to relieve humans from repetition and toil. We use photocopiers because copying things by hand is tedious and error-prone. Look at a medieval manuscript sometime and see how often the monks’ minds wandered and they started doodling snails in the margins of their sacred texts.
The maxim of garbage in, garbage out still holds. AI applications need high-quality and sector-specific data, and that data requires regular evaluation by real people to ensure that both qualities exist. Training AI isn’t as set-it-and-forget-it as some of the field’s evangelists would have you believe. Expertise, both in how LLMs function and in the domain knowledge relevant to the application, is non-negotiable, and that begins with making a properly informed decision to adopt AI in the first place.
“The mistake I see a lot of people make is they haven’t asked the first question: Where does this business need help?” said McDonald. “That’s the first place to start. Find the problem and then use AI to fix the problem.”