This is Optimizer, a weekly newsletter sent every Friday from Verge senior reviewer Victoria Song that dissects and discusses the latest phones, smartwatches, apps, and other gizmos that swear they’re going to change your life. Optimizer arrives in our subscribers’ inboxes at 10AM ET. Opt in for Optimizer here.
This time last year, I’d cut 16 minutes off my four-mile run time, was lifting three to four times a week, and had lost 10 pounds after a consistent six months of training. I felt amazing. Then life happened.
A year later, I haven’t run more than a 5K in three months, I’ve gained back those 10 pounds from stress, and I’ve been beset by injuries, illnesses, and other health concerns. Much of this was due to factors outside my control. Frustrated, a month ago I decided to lock in and test three AI fitness coaches and plans: Fitbit’s AI health coach, Peloton IQ, and Runna. I’d try them out while training for a 5K race to see if I could improve my time, which had slipped over the last year from 31 minutes to 38 to 40 minutes.
In short, I ran that 5K race last week. I improved my time by a whole five minutes. After I told all three AIs to take a hike.
I’m not universally against AI coaching in health and fitness apps. The data slog is often overwhelming. As a lifelong overachiever, it’s a constant journey to recognize and accept my limits. Having an intelligent guide to check me when I’m being unrealistic or falling into a negative mindset is — in theory — a good idea. The reality, however, is never quite that simple.
The pitch with these coaches is that they can demystify training and personalize it to your individual circumstances. When you set these trainers up, you tell the AI a certain goal of yours — to lose weight, improve fitness, run a distance within a certain time, or some other variation of that. With chatbots like Fitbit’s AI coach, you can tell it other details, like “I’m starting new medications” or “I’m prone to shin splints and have access to a Peloton Bike.” Theoretically, this helps the AI better customize its recommendations.
For instance, Fitbit’s AI suggested that since I was coming back from a two-week illness, I should incorporate gentle bike rides, walks, and steady-state Zone 2 runs to ease back into things. It generated a program with three workouts per week. Not bad.
Peloton IQ, however, was a bit more loosey-goosey. I had to do three workouts to unlock AI insights, and in the meantime, its suggestions were based on workout history from four years ago. Meanwhile, Runna’s AI-generated plans were more or less based on a survey. After certain workouts, it might adjust your pace targets or offer advice based on how you ran. If you’re sick, traveling, or injured, it’s on you to tell the app.
So where did things go wrong? First, these AI features don’t hold you accountable. It’s so easy to fib your way into extended rest. If you want tough love, you’ve got to tell the AI that’s what you want. Even then, you can always disable that in settings if the AI gets on your nerves. Say you’re feeling a smidge tired but could still exercise. You tell the AI, “I’m tired today.” What you might need to hear is, “Just get out the door, see how you feel, and quit early if needed.” Instead, you’ll likely hear, “Oh, that’s okay, be gentle with yourself and take an extra rest day!”
I consider myself disciplined, but I’m human. On my bad days, I’ve manipulated Fitbit’s AI into letting me off the hook. It’s never called me on my bullshit. With Runna, it’s so easy to quit a program and start over. I could ignore Peloton IQ’s weightlifting suggestions if I didn’t want to suffer through sore muscles the next day. There are no consequences besides a guilty conscience. That, too, can be rationalized away. Sometimes, the thought of explaining myself to an AI coach felt like so much work, it became more attractive to avoid it altogether.

This is much harder to do with a human coach, doctor, running group, or accountability buddy. Whenever I’m in a rut, I can count on my spouse to stare at me and say, “You know you always feel better after a run. Just go for 15 minutes.” If I want to skip a race in freezing weather, my bestie will remind me why I signed up for it. I don’t want to be scolded at my next doctor’s appointment. Yes, this causes some anxiety. But I don’t want to let these people down. That motivates me to show up for myself. AI might be able to read my metrics, but it isn’t wise enough to discern when I might psychologically need a push or a break.
Another problem is one I’ve written about before: the obviousness of the advice.
That’s fine if you’re a beginner. At the start of a fitness journey, any information is helpful. But when you’ve been at it for a while, the advice is typically repackaging things you already know. Runna’s insight on my pace was that I tend to be inconsistent: I start too fast, which leaves me tired by the end. I’ve known this for the past 10 years. Fitbit’s AI often told me to aim for eight hours of sleep per night and try a bedtime routine. I’ve known this since childhood. Peloton IQ was occasionally helpful with strength training form, but that’s about it.
Plus, AI frequently needs handholding. Fitbit told me that because of the cold weather, I should stick to treadmill runs. I don’t have a treadmill, and I hate running on them. I’d much rather run outside in the cold or replace runs with indoor bike rides if the temperature is under 30 degrees Fahrenheit. It acknowledged my preference and then scheduled another treadmill run.
After two weeks of alternating between bullying the AI and following its recommendations, I ran my annual Thanksgiving 5K. With about a week and a half to go before my race, it would serve as something of a benchmark. My legs felt heavy, my tunes were doing nothing for me, and I was preoccupied with how I was doing. Every 30 seconds, it felt like I was being interrupted by an AI voice telling me I was behind or ahead of pace.
I had to take three walk breaks, I got a cramp halfway through, and I hated every second of the run. I posted a miserable (for me) time of 41 minutes.
My postmortem AI analysis of the run made my head swim. Altogether, the three coaches’ insights could be summed up as: Well, nothing in your metrics suggests you weren’t well rested. Did you eat beforehand? Remember, you want to fuel before a run. Plus, you were inconsistent in pace. Try to conserve energy early on so you can finish strong. Sleeping seven to eight hours a night can also help! Did you want to factor in your medication side effects? Want to rearrange your lifting schedule to effectively taper for your race?
That’s about when I realized I was deferring to AI when I should be trusting my gut. My gut told me I was overwhelmed by all the data. I was spending so much time coaching various AI tools on how to coach me that I’d begun dreading my workouts. I deleted my Runna plan. I took off my Fitbit and hit pause on testing. And while I still used Peloton for classes, I ignored the AI features. I readjusted my mindset from improving my 5K time to simply enjoying the race day energy.
On race day, I barely looked at my watch the entire time. I had no idea what my splits were, but I was proud that I didn’t need to take any walk breaks despite the hillier course. According to my Apple Watch, I finished in 36 minutes. That was five minutes faster than my Turkey Trot, with a quicker average pace than any of the runs from my AI fitness testing. It wasn’t a perfect run, but it felt like a good run.
That’s the thing about improving your health. A large portion of it is a mental battle between who you were and who you want to be. AI is incapable of being truly invested in that journey because it doesn’t actually know you. In the end, it’s still on you to know what’s best for yourself. Sometimes, that’s telling AI to shut the hell up.