Building better “makeware” — software for people who make things

Caitlin Pequignot
16 min read · Jan 4, 2024


An abstract, 3D image of a creative person trying to build something with an iPad-type device.

In the diverse world of “makeware” — software that people can use to build other software or creative works — a lot rests on the shoulders of the users who are creating. But there are many opportunities for product teams to ensure their success. I’m an ex-Airtable UXR, product strategist, and musician with professional and personal experience improving and using makeware tools. These are my observations on how we can improve this kind of software for everyone who dares to make something new for others.

What is makeware?

Makeware is a term I’m using to refer to products that let people build software, workflows, or creative works (tools like Airtable, Zapier, Webflow, Squarespace, Adobe, or even GarageBand). If your company is building a tool that people use to make something for other people, whether on their team or the broader world, I’m referring to that as makeware for the purposes of this discussion.

Who uses makeware?

While granularities and subgroups certainly exist depending on the use case or industry of specific makeware, we can generally sort makeware users into the following “proto-personas”:

  • Creators — people who use the makeware to build or create products and tools for themselves or others
  • End users — people who use the tools that creators make to do their work, or people who enjoy the content they create
  • Stakeholders — for makeware that produces tools or workflows, people who have some decision-making influence over the makeware and the tools it generates

In my time at Airtable, I fleshed out personas for each of these groups, including creator subgroups that were relevant to the lifecycle of the tools they made. For the purposes of this discussion, I want to focus on the needs of creators, since that’s the group on which the success of makeware tools, and of the people they serve, rests most heavily. Without someone to build with the needs of the team in mind, the creation risks not being used or enjoyed by the people it was made for.

What determines if people are successful with using makeware?

First, we need to define what “success” looks like for people building things with makeware. For some, it might be that the apps, websites, or tools made with the makeware are getting a certain amount of end user engagement. For others, it might be that a process is saving a meaningful amount of time or money. However that success is defined, the builder has to determine what they need to make and how to use the makeware to make their solution real.

Along that builder journey, I have found there are three key dimensions that affect builder success.

  • The complexity of what they are making
  • The skill or familiarity that they have with what they are making
  • How usable the makeware is

Let’s model this by thinking about a successful makeware building experience as being like a road trip. The complexity of what people are making is the route for the trip. Is it a quick trip to the store (a simple project management app, one-page marketing website, or beginner GarageBand song)? Or is it a multi-day trek over rough terrain (a workflow across multiple teams, an enterprise eCommerce site, or an orchestral score recording)?

This is a diagram showing two building goals. One is simple, a straight line with a car above it leading from abstract idea, to trying and building, and then to deployed solution or content, leading to the retail store Target. The other building goal is complex, with a squiggly line leading to Yosemite National Park.
May we all find the simplest road to Target.

Let’s use the complex road trip here to illustrate the most extreme example of user frustration. The skill or familiarity that users have with what they’re making, in this example, is fairly easy to map — it’s who is driving the car. An experienced driver who has taken many road trips likely won’t have much difficulty navigating the twists and turns of this complex journey — even if they might get frustrated sometimes. But a teen who has just gotten their permit — or Chevy Chase in National Lampoon’s Vacation — is likely going to be in for a rough journey, and it might take them much longer than it would take the experienced driver to get to their destination.

This is a diagram showing the journey differences between an experienced user and a less experienced user. Vin Diesel from the Fast and the Furious “drives” a car to Yosemite and has no issue. Chevy Chase from National Lampoon has a much harder time.
Good thing Chevy Chase was never chased by Vin Diesel.

But wait. Let’s not strand this inexperienced driver and put them at risk. Let’s give them a self-driving car that can make the road trip for them! In this way, we can model the usability of the makeware as the car itself. Chevy Chase operating a self-driving car might have an easier time getting to their destination. But Chevy Chase stuck with a stick-shift might never leave the parking lot.

This is a diagram showing the differences between a more usable software vs. a less usable one. In one, Chevy Chase is driven to Yosemite in an electric car. In the other, he is stuck in a valley of troubleshooting with a stick shift car.
Well, he got further than I would have in a stick shift!

To bring this back to what product teams can control, it’s important to remember this: we arguably have the least control over the skill or familiarity our users bring to the table. But we can make our cars — our makeware itself — more usable for them. That way, more people can be successful with our tools, regardless of their building goal.

But what about the complexity or use case of the goal itself? Do product teams have any control over that? To some extent, yes — teams can choose to focus on user segments with more or less complicated creation goals, depending on business objectives and use case prioritizations. Maybe your business doesn’t care so much about serving the needs of racecar drivers. But if a goal of your makeware is to make creation easier for more people, the software can be optimized to reduce how hard it is to achieve a complex goal with the makeware itself. There are a multitude of ways to do this: providing users with templates, AI assistants, and workflow builders, to name a few.

The key to building makeware for the most people is not only to make it more usable, but also to make intelligent decisions with the user, not for them, to get them to their destination faster and with the least amount of frustration.

Adjusting goal complexity and improving usability smooths the “tailorability” curve of our makeware products — in simpler terms, it makes more complex functionality easier to attain for more people. This is important because makeware products can be inherently hard to use and take time to master, especially when the building or creation goal is complex. Unfortunately, many people, especially enterprise users, don’t have the skill or time to invest in using the makeware we offer them.

In my time at Airtable, I worked on several projects that endeavored to make Airtable’s power more accessible to people, regardless of their skill level. As a musician and product strategist, I’ve used other makeware tools myself to build products and content for others to enjoy. In doing both, I’ve observed the following product opportunities, which I believe apply broadly across the makeware space and are worth addressing to make these tools more accessible for everyone.

How do we make makeware more accessible for the most people?

Over the course of my career and personal experience, I’ve observed that allowing people to make useful mistakes, providing helpful stimuli to react to, maximizing for short yet meaningful product experiences, and leaning into rather than fighting established mental models can catalyze a more enjoyable makeware experience. By making experimentation feel cheaper, creation progress more easily recallable, and iteration more proactive, we can give users of all skill levels and goal complexities a better shot at success.

People need to make mistakes to learn

Why it’s important: Iteration is a product mindset, but not one we encourage in our users or even prioritize at work. When we ignore it, people get stuck in the creation process and don’t know how to get out.

When I was a math instructor tutoring students at Mathnasium, I didn’t worry that it took a student a few tries to use a number line correctly. With the right scaffolding, a student can use their previous mistakes to reach the right answer later. When I was working as a UXR on a growth team, I didn’t question why we built, ran, and shipped growth experiments — and then designed something else if an experiment didn’t go the way we wanted. But as a UXR studying users building things with makeware, I see that this useful loop of try, fail, learn, repeat — the basic product iteration cycle — isn’t leveraged the way that it could be to help users learn from their inevitable mistakes.

A very clear example of this is how today’s makeware does the bare minimum when it comes to letting users undo their mistakes, either explicitly in the UI or via an implied knowledge of CTRL+Z. But lots can go wrong here. People can have a hard time finding or intuiting the undo ability, reverting a specific action they took several steps ago, or knowing whether the system undid the action at all (a lack of system status).

DialogFlow is a Google product that allows people to build traditional AI chatbot workflows. Unfortunately, it doesn’t handle undo well because it offers no system status of whether or not the undo has actually occurred. Here I delete a test intent and try to get it back, first by pressing CTRL+Z off-screen, then on-screen by looking for a “trash” in the sidebar. Neither attempt is successful.

A GIF of the DialogFlow UI is shown here, which is a left sidebar of main menu options, a list of chatbot intents, and an area on the right where users can test the chatbot. Here, an intent is deleted, but it’s difficult for the user to know if that action can be undone when they try because the UI does not show them anything.
DialogFlow, like many builder products, doesn’t show “undo” to the extent that makes users feel comfortable trusting it.

And allowing for undo is really only the bare minimum. In situations where undone actions indicate an exploration rather than a simple click error, the interaction is an opportunity for learning and redirection that few products take (though detecting intent like this is certainly easier said than done).
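For teams wondering what a “better undo” might look like mechanically, here is a minimal sketch, assuming TypeScript and entirely hypothetical names (this is not how DialogFlow or any real product implements it), of an undo history that always returns a human-readable status message the UI can surface, as a toast for example. It addresses the “did anything actually happen?” problem from the example above:

```typescript
// Minimal sketch of an undo history that always reports what happened,
// so the UI can show explicit system status instead of silently reverting.
// All names here are illustrative, not from any real product.

interface Action {
  label: string;        // human-readable, e.g. "Deleted intent 'Test intent'"
  undo: () => void;     // how to revert this action
  redo: () => void;     // how to reapply it
}

class UndoHistory {
  private past: Action[] = [];
  private future: Action[] = [];

  record(action: Action): void {
    this.past.push(action);
    this.future = []; // a new action invalidates the redo stack
  }

  undo(): string {
    const action = this.past.pop();
    if (!action) return "Nothing to undo";
    action.undo();
    this.future.push(action);
    return `Undid: ${action.label}`; // surface this in the UI
  }

  redo(): string {
    const action = this.future.pop();
    if (!action) return "Nothing to redo";
    action.redo();
    this.past.push(action);
    return `Redid: ${action.label}`;
  }
}

// Usage: record a deletion, then tell the user exactly what undo did.
const history = new UndoHistory();
history.record({
  label: "Deleted intent 'Test intent'",
  undo: () => console.log("restore intent"),
  redo: () => console.log("delete intent again"),
});
console.log(history.undo()); // "Undid: Deleted intent 'Test intent'"
```

The important design choice is that undo and redo never fail silently: every call produces a message the user can see, so the system status is explicit even when there is nothing left to undo.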

Imagine if DialogFlow were smart enough to have figured out that my deletion of specific training phrases, based on strings or other metadata, was indicative of a common mistake. Its interpretation of my actual intent, correct or not, might lead me to a more successful action (more on this in the next section).

This is a prototype image of a potential DialogFlow redirection. It shows the DialogFlow UI as shown previously, with a modal that says “Test intent” deleted, with a clear blue Undo button. Another modal says, “If you’re making test intent changes, you can do that in the demo center.” This one has a blue button that says, “Try It”.
Note: the “demo center” of DialogFlow doesn’t exist, and neither does my skill as a designer. This is meant to demonstrate a possible redirection only.

Versioning and branching are more powerful evolutions of undo that enable the “what-ifs” of learning to occur. A great example of this that I used in my own initial learning of R is the data exploration, visualization, and statistical analysis tool Exploratory. As a start, Exploratory allows you to go back in time to previous steps you may have applied to your data frame.

This is a GIF of Exploratory’s data analysis UI, which looks like a dashboard with many graphs visualizing the data underneath. The right sidebar shows data manipulation steps, like a filter, in the order that the user implements them. I show how a user can easily revert to a previous manipulation by clicking directly on the previous step.
Going back to a previous data frame in Exploratory is relatively easy.

But Exploratory’s branching allows users to try out analyses in a non-destructive way. If you wanted to go down an analysis rabbit hole that excluded “unknown” airports, for example, you could go off and do that without having to make a separate file.

This is a GIF of the Exploratory UI showing how a user can branch off a data manipulation step without destroying the underlying data.
You can branch off of your main project file in Exploratory like this. I’ve often wished that I could do this in makeware tools, especially in Airtable, Figma, or my music production tool, Ableton Live.

The workaround for the lack of this kind of functionality is well-known: hacking your way through file duplication as a means of trying out ideas.

This is an image of music files on my computer. There are multiple files, all with slightly different names that show versions saved as separate files.
These are Ableton files — songs I’m working on. Sadly, I don’t know what all of these versions are. Generally, the longer the name, the newer the version. Foolproof heuristic.

Unsurprisingly, this kind of file hacking is much more difficult in cloud-based tools, where there are no explicit files to save, but users still find a way.

This is an image of the Airtable homepage, which shows five different bases all with slightly different names, such as Budgets 2023, Budgets V2, Budgets, etc.
Who amongst us is not guilty of something like this?

As anyone who has built something that other people use or consume knows, versioning and branching naturally become essential to change management as processes age and business needs change. So getting users used to a forgiving iteration cycle can build trust that the product will keep supporting them as their needs evolve. And it can make building easier and more enjoyable.
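To make the branching idea more concrete, here is a minimal sketch, assuming TypeScript and illustrative names (this is not how Exploratory, Airtable, or Ableton actually implement it), of modeling each building step as an immutable snapshot. Branching then becomes committing a new step onto an older snapshot, rather than duplicating the whole file:

```typescript
// Minimal sketch of non-destructive branching: each step is a snapshot node,
// and a branch is just a new child of any earlier node. Exploring never
// mutates the original chain. Names are illustrative only.

interface Snapshot<T> {
  id: number;
  label: string;               // e.g. "Exclude 'unknown' airports"
  state: T;                    // the project state at this step
  parent: Snapshot<T> | null;
}

class ProjectHistory<T> {
  private nextId = 0;

  root(label: string, state: T): Snapshot<T> {
    return { id: this.nextId++, label, state, parent: null };
  }

  // Branching is just committing onto an older node instead of the latest one.
  commit(parent: Snapshot<T>, label: string, state: T): Snapshot<T> {
    return { id: this.nextId++, label, state, parent };
  }

  // Walk back up the chain to list the steps that produced a snapshot.
  lineage(node: Snapshot<T>): string[] {
    const steps: string[] = [];
    for (let cur: Snapshot<T> | null = node; cur; cur = cur.parent) {
      steps.unshift(cur.label);
    }
    return steps;
  }
}

// Usage: branch an analysis off the import step without disturbing the main line.
const history = new ProjectHistory<string[]>();
const base = history.root("Import flights", ["all rows"]);
const main = history.commit(base, "Summarize by airport", ["all rows", "summary"]);
const branch = history.commit(base, "Exclude 'unknown' airports", ["filtered rows"]);

console.log(history.lineage(main));   // ["Import flights", "Summarize by airport"]
console.log(history.lineage(branch)); // ["Import flights", "Exclude 'unknown' airports"]
```

Because nothing is ever mutated, the main line of work and any number of exploratory branches can coexist in the same project, which is exactly the non-destructive play that file duplication tries to approximate.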

People learn better by reacting

Why it’s important: Even if a suggestion is wrong, it can be a guide that can help users unstick themselves later, or discover something they wouldn’t have otherwise.

Starting from a blank slate might be a welcoming canvas to advanced users, but it can induce panic in people with less skill or familiarity with the thing they are making. Interventions such as templates can help by giving people something to start with, but depending on the complexity of what someone is trying to make, templates can be too basic or not customizable enough for their unique work.

Imagine an end state that builders can look at and adjust with a few instructions. This could be a more useful way to provide the value of a template with the flexibility of tweaking the end result in real time. Prompt engineering, the process of iterating toward an AI output closer to what you want, is actually a good example of how an end state helps someone approach their goal. However, most prompt engineering UIs don’t encourage or help the user get closer to their desired output.

Here is a personal example. I recently tried to get ChatGPT to talk like Felicity Merriman — an American Girl doll from the Revolutionary War (it’s a long story). Throughout this process, I had to iterate a few times to get ChatGPT to avoid anachronisms, sound less like a helpful assistant, and consistently answer in the first person.

This is a ChatGPT conversation, the text input of which reads: pretend you are felicity merriman from the american girl company. i would like to chat with you as if we were connected by a magical device that could let us talk back in time. ChatGPT: Of course! I’d be delighted to step into the shoes of Felicity Merriman, a character from the American Girl series, and chat with you…What era or time period would you like to discuss or imagine our conversation taking place in? [Edited for length]
I gave ChatGPT too much credit, thinking it might intuit Felicity’s year from its knowledge base. I had to be more explicit, and later tell it to tone down its default assistant persona.

I was able to come up with the edits to my prompt myself, but as the complexity of what we want out of our generative AI partners increases, so too should the scaffolding that helps us narrow in on our objectives for them.

This is another ChatGPT 3.5 image. The text is too long to reproduce here, but in general, it shows an improved version of the AI persona, talking like someone from 1774.
After getting ChatGPT to stop mentioning the century, stop sounding like an assistant, and speak in the first person like an old friend, it took on the persona of Felicity fairly well and handled my roleplaying as a woman in the late 18th century in stride.

As I worked on this article, OpenAI announced its upcoming GPT Builder, which lets people use natural language to create GPT assistants. This type of iterative natural questioning leverages the kind of “yes and” iteration that I found necessary to create my Felicity Merriman GPT, and the kind of iteration builders often find necessary to make useful tools out of makeware. I’ve since used GPT Builder to make other GPTs for myself in this iterative way, including one that helps me find plot holes in my young adult fiction novel.

This is a screen capture from a video of Sam Altman demoing the GPT Builder at OpenAI’s DevDay. He stands at a podium with a large screen next to him that shows the GPT Builder.
Sam Altman demos GPT Builder at DevDay 2023. The kind of questioning that GPT Builder starts with provides something for the builder to react to — no matter how right or wrong. This would have helped unstick me while I noodled over how to iterate on Felicity’s assistant.

I’ve also observed research participants returning time and again to resources like YouTube videos when they get stuck building something difficult. This is because they need to see a similar problem reflected and solved with someone else’s approach. This takes time and motivation to seek out, though, and not all creators will be willing to sink this much effort into using a makeware product. Since this kind of abstraction can be so difficult, especially for users with less skill or experience, it’s important for makeware to provide quicker feedback or proactive suggestions that people can react to or accept, as opposed to forcing them to come up with their end goal on their own.

People’s circumstances don’t always allow for extensive learning

Why it’s important: The asks we make of users with respect to iteration and learning need to be respectful of the time they realistically have to dedicate to learning — but this is easier said than done.

When research participants answer “I don’t have time” to a question about why they didn’t do something, I find that there is usually a more specific answer underneath it.

Rather, “I don’t have time” becomes:

“This is too hard to figure out in the 20 minutes that I have every week to dedicate to evolving my business process.”

“Our new director is trying to bring on the software they used in their last job (instead of yours), so I haven’t felt like it’s worth it to even try [to learn yours].”

Of course, some external factors aren’t in the domain of things a company can solve. But understanding the window of time that people have available to use your makeware product is important. If a builder typically only spends up to 5 minutes in the tool, how can the team make this 5-minute session the most valuable? What’s a meaningful unit of work they can get done so that they leave feeling satisfied and not frustrated?

At Airtable, the new user experience team experimented with a checklist-model experience that picked up where newly signed-up users left off in their building session. The idea here was to give new users discrete steps they could take to familiarize themselves with Airtable. We made the experience easily recallable and the steps quick enough to be completed in even just one session, if they liked.

An Airtable base with a new user experience modal drawer open on the bottom right. In the modal, there are several steps that users can take to get started with their new base, such as “Create a table” and “Set up the columns”.
Airtable’s first-time base building feature is intended as a reference that guides builders through the steps of their base building process over multiple sessions.
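As a rough illustration of what “easily recallable” can mean mechanically, here is a minimal sketch, assuming TypeScript, made-up step names, and simple browser storage (not Airtable’s actual implementation), of a checklist whose progress survives between sessions, so the next visit can start at the first unfinished step:

```typescript
// Minimal sketch of a resumable onboarding checklist: progress is keyed by
// user and persisted, so a short session picks up where the last one left off.
// Step names and the storage mechanism are illustrative only.

interface ChecklistStep {
  id: string;
  label: string;
  done: boolean;
}

const DEFAULT_STEPS: ChecklistStep[] = [
  { id: "create-table", label: "Create a table", done: false },
  { id: "set-up-columns", label: "Set up the columns", done: false },
  { id: "add-records", label: "Add a few records", done: false },
];

// Load saved progress (here, from localStorage), falling back to the defaults.
function loadChecklist(userId: string): ChecklistStep[] {
  const saved = localStorage.getItem(`onboarding:${userId}`);
  return saved ? (JSON.parse(saved) as ChecklistStep[]) : DEFAULT_STEPS;
}

// Mark a step done and persist the updated list.
function completeStep(userId: string, stepId: string): ChecklistStep[] {
  const steps = loadChecklist(userId).map((step) =>
    step.id === stepId ? { ...step, done: true } : step
  );
  localStorage.setItem(`onboarding:${userId}`, JSON.stringify(steps));
  return steps;
}

// The next session reads the same key and surfaces the first unfinished step.
function nextStep(userId: string): ChecklistStep | undefined {
  return loadChecklist(userId).find((step) => !step.done);
}
```

The point is less the storage mechanism than the contract: a short session should always be able to ask “what is the next meaningful thing I can finish?” and get an answer.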

Success for us meant engaging self-serve builders further in the building process, which this release accomplished. But the process of building websites, apps, tools, and workflows can take days, weeks, or months depending on the size of the team or the complexity of the thing being made. Other research that I led suggested that certain audiences were less motivated to sink time into the difficulties of using Airtable than others, especially given their often difficult building objectives.

A challenge for product teams is to realistically answer the question of how much effort a target user group is willing to put into learning the makeware, especially if their skill and familiarity are low. Using that effort as a “reality filter”, teams can then test solutions that can help bridge that gap.

Another challenge that time-poor builders face is the amount of troubleshooting they may have to do to get the thing they’re building that last 30, 40, or 50% of the way to the finish line. I used to refer to this at Airtable as the customization wall, or the long tail of customization. Untangling the knots that builders run into as they continue to build, especially after an onboarding experience that makes it relatively easy to get started, can be a jarring experience for them. Fortunately, the kind of iterative questioning and answering that generative assistants are good at makes me hopeful that they can be of great value during this troubleshooting phase of building with makeware products.

As a real-life example of a product that’s headed in this direction, I tried out Coda’s new AI assistant. It’s a great start for brainstorming, especially with text as an input source. When I tried to make a demo UXR repository in it, I got excited that the AI assistant would be able to help me figure out how to link Projects to Assignees. Unfortunately, it’s not quite able to do this yet from natural language alone; or at least, I wasn’t able to get it to work the way I intended. Still, the promise of what these assistants can do to help unstick builders with tough concepts is certainly there and growing.

This is an image of Coda.io, a knowledge management product. Here I have two tables in Coda that I’m trying to link together by asking an AI assistant in the sidebar to link them for me. It’s not able to do it yet, or at least, I wasn’t able to figure it out.
Coda is on an optimistic path to leveraging AI assistants to help with the customization wall.

Product teams can address these issues by prioritizing experiences that respect the realistic time a user will spend in their product, and by leveraging generative AI to help unstick creators in moments of confusion. These interventions and others can ease the time burden on creators using makeware products.

People are rooted in software and interactions they know

Why it’s important: Focusing on how your product is different might help sell your user at the “shelf decision” (when they’re deciding whether to choose your product or a competitor’s), but it won’t help the people who have to use it relate it to their past experiences.

Our brains are associative machines. A friend who used to struggle with conversations told me about a breakthrough moment when he realized that having one is “just saying something that’s related to what someone else just said”. So it shouldn’t be surprising when someone uses a makeware product and says, “oh, this acts like X but also kind of acts like Y” in order to understand it or set an expectation about how it works.

However, in their effort to differentiate themselves in a market that’s increasingly prioritizing tool consolidation, products sometimes fight this comparison in favor of touting what makes them different. That can be to the detriment of the people using them.

I experienced this firsthand as a user of the digital audio workstation (DAW) Logic Pro transitioning to Ableton Live (another DAW). One of Ableton’s differentiating factors is its session view, which lets artists cue clips of their music in real time, allowing them to jam on the fly. This is a super fun and useful way to make music, but it was tough to learn coming from a mental model where I thought about a song’s lifecycle as moving from beginning to end.

This is an image of Ableton Live’s UI, a digital audio workstation. It shows many different instrument columns with scenes inside each. These scenes play audio clips or loops.
Ableton’s session view. Clips can be set to loop or play once, allowing the user to build a song unconventionally.

Compare that with Apple’s digital audio workstation, Logic, which only has the more traditional “timeline”-like view of making music.

This is an image of Logic Pro, Apple’s digital audio workstation. It has instruments as rows, with their audio regions stretched out over time. This is the more traditional way of showing audio regions.
In contrast, Logic’s timeline view is the most conventional way of viewing a song in this kind of software. While Ableton also offers a view like this, its session view was one of the reasons I chose to switch music production software.

In an effort to wrap my head around how Ableton worked, I kept trying to graft Logic Pro affordances onto it, struggling with Ableton’s unique and more flexible session view (cueing and playing sections of your song).

If Ableton had provided an onboarding that explained it in the context of someone coming from a more traditional digital audio workstation, I might have had an easier time migrating to it from Logic Pro. The same kind of opportunity applies to enterprise software too, especially those tools that look like Excel or Sheets but work differently under the hood, such as Airtable, Smartsheet, or Coda.

What can makeware products do to make creation easier, faster, and more attainable?

In the course of my career, I’ve observed that people need to make mistakes to learn, and they need options to react to that can help them learn. However, people have other priorities than learning your software and are rooted in software and interaction patterns that they know already.

I’d be a poor research partner if I didn’t leave this discussion with some recommendations for product teams making these kinds of tools. I believe makeware companies can help their builders by making makeware easier to use and providing building blocks that lower the perceived complexity of what they are trying to make. I believe there is a world where makeware tools are cheaper to play with and learn from, with better multi-session support to help people pick up where they left off, and a focus on providing examples and options rather than forcing people to come up with something for the sometimes abstract decisions they are trying to make.

Here are a few ideas I have that could improve the building process with makeware tools:

  • Onboarding that uses the terminology of similar experiences, where applicable
  • Multi-session help experiences
  • In-product AI that provides building options within the conversation
  • Better “undo” and “redo” system status
  • Proactive redirection after undo
  • Branching so people can play with options non-destructively

By making people’s difficult building goals easier to attain, and by improving the usability of our own products, we can empower more people to be successful using our makeware. That way, more people than ever can build apps, software, workflows, websites, and even songs.

About the author: Caitlin Pequignot is a senior user experience researcher and strategist. Previously a UXR at Airtable, she has worked on research projects for clients such as Google, Snap, and the Field Museum. She has also been a professional violinist for 14 years, performing with groups such as the Orlando Philharmonic Orchestra, Alterity Chamber Orchestra, and the SF Philharmonic. Please visit her website to learn more about her and her work, or to get in touch.
