Machine-Generated Anchor People and Artificial Skies
AI has been making inroads in the videography and photography arenas. In both fields, media companies and software developers are working to create machine-generated videos and images that the public will accept.
Pure-play AI video programs have been attempted on several occasions, most recently by Xinhua, a Chinese state-run media company, and Sogou, a Beijing-based search engine. The two companies debuted Chinese- and English-speaking AI news anchors at this autumn’s World Internet Conference.
The bots’ movements are based on the gestures and facial expressions of real actors. However, viewers found the anchors’ stilted speech and awkward movements creepy, a clear sign that the companies had not yet reached a minimum viable product (MVP).
Once the technology matures, you can expect media companies to deploy AI video characters. They already use writing tools like WordSmith to generate thousands of automated stories every year.
Fake Video
Perhaps more disturbing on the video front are the modestly successful simulations of former U.S. President Barack Obama. These attempts, produced at the University of Washington using Adobe After Effects and the AI face-swapping tool FakeApp, are uncanny, yet still a tad unreal once you watch the fake Obama’s mouth movements.
Best known as deepfakes, these videos use AI to swap the face of a known person onto that of a body actor. Deepfake technology is often used in pornography to superimpose a celebrity’s face on a body double.
Deepfake videos and images have been deemed unethical across large swaths of the Internet, but that doesn’t mean they are going away. The trend raises a new problem of verification and analysis: identifying when a video has been altered by a machine.
Verification came into play in November 2018, when the Trump White House shared an altered video of a contentious press conference involving CNN’s Jim Acosta. The modified video made it seem as though Acosta had violently chopped at the arm of a White House intern who was trying to take his microphone. The White House used the fake video to justify revoking Acosta’s press credentials.
In the future, blockchain certification may be used to verify not only a video’s authenticity but also who created it. The value of a trustworthy content creator will only increase as altered videos become more common.
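To make the concept concrete, here is a minimal sketch of the signing half of such a scheme: hash the video file, then sign the digest so anyone can verify both integrity and origin. A blockchain’s role would be to timestamp and publish that record. This is an illustration, not any production provenance system; the file name is hypothetical, and it relies on the third-party Python cryptography package.

```python
# Sketch: hash a video file and sign the digest so viewers can verify
# both integrity (the file is unaltered) and origin (who signed it).
# A blockchain's role would be to timestamp and publish this record.
# Requires: pip install cryptography
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_file(path: str) -> bytes:
    """Stream the file so large videos never sit fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# The creator signs the digest with a private key only they hold.
private_key = Ed25519PrivateKey.generate()
digest = sha256_file("press_briefing.mp4")  # hypothetical file
signature = private_key.sign(digest)

# Anyone holding the public key can confirm the file hasn't changed
# since signing; verify() raises InvalidSignature if it has.
private_key.public_key().verify(signature, digest)
print("digest:", digest.hex())
```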
The Best Use of Video AI?
However, the most significant use of video AI may not be in creating content, but in assisting with shooting and developing it. Camera companies have embedded impressive eye- and face-recognition technology in their sensors and firmware. The resulting videos and photographs are far more likely to be sharp.
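In-camera autofocus systems are proprietary, but the underlying idea can be sketched with OpenCV’s stock face detector: find the faces in the frame, then treat the largest one’s center as the focus target. A rough illustration only; the image file is hypothetical.

```python
# Sketch of face-priority focusing: detect faces, pick the largest,
# and report its center as the point a camera would focus on.
import cv2

img = cv2.imread("portrait.jpg")  # hypothetical image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# OpenCV ships a pre-trained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # Prefer the largest face, as cameras typically do.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    print("focus point:", (x + w // 2, y + h // 2))
else:
    print("no face found; fall back to center focus")
```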
Artificial intelligence is already used to turn text into audio and to assist with music recording. It’s not hard to envision basic cameras and phones using AI to lock onto audio signals as well.
The real benefits of video AI lie on the editing side, in Adobe video apps like Premiere Pro and After Effects. AI in these tools, powered by Adobe’s machine-learning platform Sensei, matches colors between multiple video shoots, optimizes sound, animates characters, and positions title sequences and transitions.
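Adobe has not published how Sensei’s color matching works, but the classical operation it resembles, histogram matching, is available directly in scikit-image. A hedged sketch with hypothetical frame files:

```python
# Sketch: match one shot's color distribution to a reference shot,
# a classical analogue of automated color matching between cameras.
import cv2
import numpy as np
from skimage.exposure import match_histograms

shot = cv2.imread("scene_cam_b.jpg")       # hypothetical frames of one
reference = cv2.imread("scene_cam_a.jpg")  # scene from two cameras

# Remap each channel's histogram onto the reference's.
matched = match_histograms(shot, reference, channel_axis=-1)

# Cast back to 8-bit in case the result comes out as float.
cv2.imwrite("scene_cam_b_matched.jpg",
            np.clip(matched, 0, 255).astype(np.uint8))
```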
Together, these tools help video producers speed up the process of developing and editing polished videos. Given how expensive video production can be, the promise of more high-quality video at lower cost will be of great comfort to marketers.
Artificial Skies
Similar technologies are being used to enhance photographs. Today’s cameras do more than ever to help photographers take better pictures. The algorithmic tools already mentioned, embedded in cameras, lock onto faces and eyes, correct color, and reduce noise.
New phones like the Google Pixel 3 also have AI embedded in them to help today’s Instagram user. Google’s Night Sight feature takes multiple exposures and aligns them into one image, using AI to correct white balance and remove ghosting and noise. The result is a well-exposed image captured in low light, building on a practice known as bracketing in the professional photography world.
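Google’s exact pipeline is proprietary, but the align-and-merge step can be approximated with OpenCV’s exposure-alignment and fusion tools. A rough sketch, assuming a hypothetical burst of frames:

```python
# Sketch: align a handheld burst and fuse it into one cleaner frame,
# loosely mimicking the multi-frame step behind features like Night Sight.
import cv2
import numpy as np

# Hypothetical burst of frames of the same dim scene.
frames = [cv2.imread(f"burst_{i}.jpg") for i in range(4)]

# Median-threshold-bitmap alignment compensates for hand shake.
cv2.createAlignMTB().process(frames, frames)

# Mertens exposure fusion blends the frames without exposure metadata;
# the output is floating point in roughly the [0, 1] range.
fused = cv2.createMergeMertens().process(frames)
cv2.imwrite("night_result.jpg",
            np.clip(fused * 255, 0, 255).astype(np.uint8))
```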
If photography were as simple as point-and-click with an AI-enhanced camera, everyone’s Instagram feed would be incredible, and all professional photographers would be out of business. Beyond learning how to compose and light an image, photographers rely heavily on a variety of digital editing tools to develop their photos. These range from consumer-grade mobile apps to professional-grade tools meant to develop unprocessed RAW images.
Mobile phone apps like Google’s Snapseed analyze the photo and offer specific corrections based on pre-programmed filters and algorithms. These AI-driven corrections reflect what the program believes is an ideal exposure or interpretation of the chosen look. Users can then adapt the algorithmic interpretation to their taste, correcting photos manually. Many of my photographer friends swear by Snapseed as the best mobile editing app.
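Snapseed’s filters are a black box, but the flavor of a one-tap correction can be sketched with two classical steps: gray-world white balance to neutralize color casts, followed by autocontrast. File names are hypothetical:

```python
# Sketch: a simple "auto" correction in the spirit of one-tap filters --
# gray-world white balance followed by a histogram stretch.
import numpy as np
from PIL import Image, ImageOps

img = Image.open("snapshot.jpg").convert("RGB")  # hypothetical photo
arr = np.asarray(img).astype(np.float64)

# Gray-world assumption: the scene should average to neutral gray,
# so scale each channel toward the overall mean to remove color casts.
channel_means = arr.reshape(-1, 3).mean(axis=0)
arr *= channel_means.mean() / channel_means
balanced = Image.fromarray(arr.clip(0, 255).astype(np.uint8))

# Stretch the histogram, discarding 1% outliers at each end.
ImageOps.autocontrast(balanced, cutoff=1).save("snapshot_auto.jpg")
```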
On the desktop side, programs from Adobe, DxO, Skylum, and others use AI to help photographers and retouchers achieve desired looks. From auto-sharpening tools to professional-grade photo-merge and layering technologies, these programs use algorithms to perform small visual miracles that once took photographers many minutes apiece.
In some cases, artificial intelligence algorithms analyze data either manually selected by the customer or encompassed in the image file as a whole. Programs like Adobe’s Sensei-powered Content-Aware Fill and Skylum’s Luminar AI Sky Enhancer then optimize images to create desired looks.
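Sensei’s models themselves aren’t public, but the basic fill operation has a classical cousin in OpenCV’s inpainting, which synthesizes plausible pixels for whatever a mask marks for removal. A sketch with hypothetical files:

```python
# Sketch: mask-driven object removal, a classical cousin of
# Content-Aware Fill. White pixels in the mask mark what to remove.
import cv2

img = cv2.imread("beach.jpg")  # hypothetical photo
mask = cv2.imread("tourist_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mask

# Telea's method propagates surrounding texture into the masked area;
# the third argument is the radius it searches for source pixels.
filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("beach_filled.jpg", filled)
```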
Some of the looks are as simple as adding a blur; others are filters imported by the photographer. Using opacity sliders, gradients, brushes, and more, photographers refine the machine’s interpretation to match their personal vision for the image.
Editing algorithms don’t reinvent photographs. Instead, they execute tasks that the photographer could, depending on their skill set, perform themselves. Achieving the same looks by hand, using conventional Photoshop techniques like dodging, burning, and layering, would simply take much longer.
I operate a photography business in addition to my marketing business. When Skylum began marketing its sky-enhancing AI, I was skeptical. But because I was in the middle of writing this article, I decided to download and try it. The algorithm did a great job of separating the sky from the rest of the picture, then primarily reduced highlights while increasing shadows, much like a pseudo-HDR edit would.
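Out of curiosity, the effect I observed can be roughed out without any machine learning at all: build a crude sky mask, then pull highlights down and shadows up inside it. Skylum trains a model to find the sky far more reliably; this numpy sketch, with a hypothetical file, is only a stand-in:

```python
# Sketch: crude sky enhancement -- mask likely sky pixels, then apply
# a pseudo-HDR tone adjustment (darker highlights, lifted shadows) there.
import cv2
import numpy as np

img = cv2.imread("landscape.jpg").astype(np.float32)  # hypothetical photo
b, _, r = cv2.split(img)

# Naive sky heuristic: bright pixels where blue dominates red.
# (A trained model replaces this threshold in Luminar.)
sky = ((b > 120) & (b > r * 1.15)).astype(np.float32)
sky = cv2.GaussianBlur(sky, (31, 31), 0)  # feather the mask edges

# Pull values toward the midtone: highlights come down, shadows come up.
toned = 127.5 + (img - 127.5) * 0.6

mask3 = cv2.merge([sky, sky, sky])
result = img * (1 - mask3) + toned * mask3
cv2.imwrite("landscape_sky.jpg", result.clip(0, 255).astype(np.uint8))
```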
In all, Skylum’s AI probably saved me about 15 minutes per image, and the resulting photos performed as well as my traditionally edited landscapes. Was it worth $40 after discounts? Yes; photography AI saved enough time to make it worthwhile.