Google’s Clips camera is a tiny sliver of a camera, the size of two Wheat Thins crackers. You can set it down anywhere or clip it to anything. Once you turn it on, you don’t have to press a button or use a self-timer to take pictures. The camera decides when to snap, based on Google’s artificial-intelligence algorithms.
The Clips’s heart is in the right place. It solves some real problems for its target audience, which is parents (of kids or of pets).
First, if you’re in that category, you’re probably never in any of your own photographs, because you’re always behind the camera. Second, babies and young children often stop whatever cute thing they’re doing the moment you pull out your phone. They get distracted by it or feel self-conscious. But the Clips avoids that problem because it’s unobtrusive and because you’re not holding it between your face and the kid’s.
Truth is, I suspect the Clips will flop. The camera isn’t very impressive next to those in some smartphones, and $250 is a steep price for a one-trick pony. But its central idea—AI as photographer—is fascinating.
AI isn’t organic. It has to be programmed—taught or coded by engineers. In other words, the AI doesn’t ultimately decide what makes a good picture; its programmers, informed by photography experts, do.
Some of the AI’s decision making in the Clips is obvious. It looks for scenes of activity. It favors familiar subjects—people whose faces it sees most often. It avoids capturing an image when something is blocking the lens, like your fingers or your grabby baby’s hands. It prefers good lighting. It takes its best shots three to eight feet away.
But here’s where things get more complicated: The camera is also designed to wait for happy facial expressions. It tends not to capture anybody who is sad, angry, sleepy, bored or crying.
That AI rule, unfortunately, rules out a lot of great picture taking. Let’s face it—a young child’s life is full of micro tragicomic moments that might be worth recording, even if they produce brief bursts of tears. You know: His ice cream falls off the cone onto the floor. A puppy licks her face a little too energetically. A well-meaning clown scares him.
Google is aware of the problem and plans to add a new preference setting—not a check box called “Include Misery” but an option that makes the camera watch for changes in facial expression. In the meantime, the Clips’s preference for joyous moments tends to exaggerate two happiness filters we already put on our lives.
First, we already self-edit our video and photographic memories simply by choosing what to shoot. Most people, most of the time, record high points such as celebrations and travel. Your collection probably contains very few pics of you fighting with your spouse, depressed by your job or in pain from an injury.
Second, we further curate our recordings by choosing which to post online. At this point, we don’t just risk deceiving ourselves about the overall happiness balance in our lives; we’re explicitly trying to paint a picture of a wonderful life for our followers. We become brand ambassadors for our supposedly flawless lives.
Studies have shown that the result of all this happy filtering can sadden other people on social media, who develop “Facebook envy.”
You begin to wonder why we take pictures and videos in the first place. What’s the purpose of those acts? Is it to create a faithful record of our lives, both high and low moments? Is there anything wrong with immortalizing only the bright spots, permitting the darker stuff to fade out of view—and maybe out of memory?
Answering those questions depends, in part, on who your audience is. An older you? Your descendants? Your Facebook friends?
There’s no right answer. We all take and curate pictures—or don’t—for different reasons. If Google’s Clips camera achieves nothing more than throwing those questions into sharper focus, its invention won’t have been in vain.