5 Mind-Bending Experiments That Show Where Creativity Is Headed Next


Researchers showed off new tools to augment human creativity at Siggraph, the year’s biggest computer graphics conference.

Every year, the Association for Computing Machinery hosts a conference called Siggraph where artists, researchers, designers, and developers show off their work on computer graphics. Since the 1970s, the conference has offered an annual glimpse at the latest techniques and tools for creativity in the digital world. This year's conference, taking place this week in Los Angeles, shows how rapidly creative technology is evolving.

Many of the most fascinating papers at 2017's conference used computational design and machine learning techniques, such as convolutional neural networks, to automate tasks that were once done manually. From animation and furniture design to photography and even robotics, these tools could one day augment human skills. If Siggraph 2017 is any indication, machine intelligence will continue to edge steadily into the lives of creative professionals. Here's how.

HOW ANIMATORS BUILD CHARACTERS

Deep learning isn't just being used to generate creepy, computer-made faces; it's also generating creepy 3D models of faces. Researchers from the University of Hong Kong created a program called DeepSketch2Face that converts a line drawing of a face into a 3D model. It looks like it could be an animator's dream, enabling near-effortless creation of models, but for now the researchers have positioned it as a tool for amateurs creating cartoons, social media avatars, and caricatures.

Another program presented at Siggraph does something similar without using AI, turning 2D sketches into 3D models of objects like pillows and shoes. It’s difficult to know just how much time these types of programs could save human creatives, but it’s easy to imagine such sketching tools being integrated into many different types of software for widespread use. Assuming you can get past the creep factor.

HOW PHOTOGRAPHERS EDIT THEIR SHOTS

Some of the advances presented at Siggraph could be coming to a cell phone near you. Researchers from Google and MIT's Computer Science and Artificial Intelligence Laboratory created a machine-learning-powered image editing program that works so quickly and efficiently that it can give your photos a professional edit before you've even taken them. According to MIT News, the tool can show you what the edited version of your photo would look like while you're still deciding how to frame the shot. Trained on 5,000 images that were each retouched by five different photographers, along with another data set of thousands of pictures retouched using existing image-processing algorithms, the program uses a new machine learning approach that runs on a mobile device without lag. If it hits Android, you may never take another bad picture.
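The researchers' actual pipeline isn't reproduced here, but the basic idea of previewing an edit in real time can be sketched as predicting a compact color transform and applying it cheaply to a low-resolution viewfinder frame. The sketch below is purely illustrative: the function name and the hard-coded matrix are hypothetical stand-ins for what a trained network would predict for each image.

```python
import numpy as np

def apply_color_transform(preview: np.ndarray, transform: np.ndarray) -> np.ndarray:
    """Apply a 3x4 affine color transform to an H x W x 3 preview image in [0, 1].

    Illustrative only: in a learned system, `transform` would come from a model
    trained on pairs of original and retouched photos, not be hard-coded.
    """
    h, w, _ = preview.shape
    # Append a constant channel of ones so the affine offset is applied in one matmul.
    flat = np.concatenate([preview.reshape(-1, 3), np.ones((h * w, 1))], axis=1)
    edited = flat @ transform.T
    return np.clip(edited, 0.0, 1.0).reshape(h, w, 3)

if __name__ == "__main__":
    # Fake 64x64 frame standing in for the camera viewfinder.
    preview = np.random.rand(64, 64, 3)
    # Hypothetical "learned" retouch: slight contrast boost and a warm tint.
    transform = np.array([
        [1.10, 0.00, 0.00, 0.02],   # red channel
        [0.00, 1.05, 0.00, 0.00],   # green channel
        [0.00, 0.00, 0.95, -0.01],  # blue channel
    ])
    edited = apply_color_transform(preview, transform)
    print(edited.shape, edited.min(), edited.max())
```

Because the transform is tiny compared with the full-resolution image, applying it to every preview frame is cheap enough to keep up with a live viewfinder, which is the property that makes the real-time preview possible.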

HOW FURNITURE IS DESIGNED

Researchers also showed off computational design tools for furniture making. A team from UC Berkeley, Adobe Research, George Mason University, and Stanford created an interactive tool for one of the most complex elements of furniture design: joinery. Traditional handmade joinery, part of many craft traditions around the world, is carefully designed so that a chair or table's pieces interlock. Those joints can be created much faster using the new tool: you design the surfaces of your model in a program like SketchUp, and the software turns them into 3D-printable parts that interlock.

HOW ROBOTS ARE ENGINEERED

From Disney Research Zurich and Carnegie Mellon comes another computational design tool, one that takes rigid robot joints and calculates the optimal design for a flexible joint with the same action. The program lets researchers preserve the motion of a robot's rigid joints while replacing them with elastic ones, taking internal stress and bending into account. It would certainly come in handy if you're creating animatronics or other kinds of robots, as one does at Disney.

HOW VIDEO IS SHOT AND EDITED

With advances in machine learning also comes the ability to fabricate video and audio in an incredibly lifelike way, as several projects at Siggraph illustrated. Take "Synthesizing Obama," in which researchers from the University of Washington trained a neural network to generate footage of Obama that looks startlingly real. Using video of Obama's public addresses, amounting to 17 hours of footage and two million frames, they were able to generate new Obamas that mime the former president's words with different facial expressions and backgrounds. In another project presented at the conference (and previously covered on Co.Design), called VoCo, Princeton researchers demonstrated how to edit an audio file by inserting words that weren't actually spoken. Incredible technology? Yes. But the future of fake news just got a little closer.

This article first appeared on www.fastcodesign.com


About Author

Katharine Schwab

Katharine Schwab is a contributing writer at Co.Design based in New York. Her work has appeared in The Atlantic, The Seattle Times, and the San Francisco Chronicle.
