Digital – or virtual – humans are everywhere. They are among the major trends coming up in 2020 and are sure to decisively reshape human-machine interactions with the help of emerging and booming technologies like AI and real-time graphics.
Industry specialist Mike Seymour held a TED talk in 2018 in which he envisioned a future where conversational human interactions were at the heart of technology. This shift toward more natural, intuitive interfaces would not only be a welcome improvement but also a necessity for the many elderly, disabled, or struggling members of society who are in dire need of human contact.
“Faces are brilliant communicators of emotion, and emotion is powerful. I want to give the world a better face by putting a face on technology. But we have to be aware that faces can be very powerful, influential, emotional and persuasive.” – Mike Seymour
Consequently, a lot of hype is building in the tech and startup world, but with hype comes confusion: buzzwords are thrown around unwittingly as onlookers conflate digital humans with virtual assistants, virtual influencers with deepfakes or even robots. So what are the differences between concepts like virtual humans, digital humans and digital doubles, and can we establish a clear map of what they mean and where they stand?
Table of Contents
- Digital humans
- Virtual humans
- Digital doubles
- Closing words: what are we to make of this mess
- What is Virtuals?
What’s a digital human?
A digital human is, in short, a photorealistic 3D human model. If you are not familiar with CGI, 3D models are like puppets in digital format that you often see in games or movies. To be more precise, a digital human is a complex 3D human model which takes advantage of recently developed high-end features to produce realistic results in terms of appearance (skin shading or hair grooming) and movement (accurate rigging and animation).
We consider digital humans to be photorealistic by definition: what differentiates them from the simple, low-poly human models that you can find in any asset store for a couple of bucks is the ambition to be as realistic as possible. This is achieved by using state-of-the-art features like advanced shading or SSS (subsurface scattering), which simulates the behaviour of light rays as they penetrate our translucent skin. 3D artists and TDs like Ian Spriggs (see our cover), who can achieve striking lifelike results, are the pioneers of digital human creation.
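To give a feel for what subsurface scattering changes, here is a toy sketch (purely illustrative, not production shader code) of "wrap" diffuse lighting, a classic cheap approximation of the soft falloff that scattered light gives skin. A plain Lambert term cuts light off abruptly at the terminator; wrapping lets light "bleed" past it, mimicking light scattered inside translucent material:

```python
def lambert(n_dot_l: float) -> float:
    """Standard diffuse term: hard cutoff where the surface faces away from the light."""
    return max(0.0, n_dot_l)

def wrap_diffuse(n_dot_l: float, wrap: float = 0.5) -> float:
    """Wrapped diffuse: light reaches slightly past the terminator (wrap in [0, 1])."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# At the terminator (n_dot_l = 0), plain Lambert is fully dark,
# while the wrapped term still receives roughly a third of the light.
print(lambert(0.0), wrap_diffuse(0.0))
```

Real skin shaders go much further (diffusion profiles, multi-layer scattering), but the principle is the same: soften how light falls off across the surface.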
Aren’t all 3D human models “digital humans”?
Why shouldn’t we call any human 3D model a digital human? First of all, “digital human” is a new trend that is closely related to breakthroughs in photorealism while human 3D models have been around for quite a while. In fact, the first human face in 3D appeared in the 1976 movie Futureworld, while the buzz of digital humans started gaining traction in 2015 thanks to the Digital Human League, a group of industry experts.
Before 2010 – and especially outside of VFX – most human models weren’t realistic, either by design or by necessity. Animated movies often featured stylized, cartoonish characters, while real-time graphics had to use low-res assets due to hardware limitations. If we classified both as digital humans, the concept would merely be a synonym for “3D humans” and we would miss the point of why this is a new, innovative trend in the first place.
Digital humans as state-of-the-art, structured 3D models
Another important point of distinction is that a digital human is a structured 3D model, which excludes deepfakes or raw scans. To those unfamiliar with 3D pipelines, “structured” means that its data has been organized and that it has gone through certain steps that make it “production ready”. On the contrary, a deepfake or a raw scan is unstructured in the sense that users do not have full control over the 3D object.
A digital human has always been through a production pipeline during which 3D artists take care of the retopology, texturing and rigging to ensure that it can be used in production. An unstructured scan can’t be of much use other than being displayed as it is, although 4D capture might change this fact. A UK-based scanning company, Infinite-Realities, is in fact working on a project called AeonX that we will make sure to closely follow.
Virtual humans, digital humans… is there even a difference?
Both terms are often used interchangeably. As trends they are fairly new, though the idea of a human in digital form had already gained some popularity back in the 1980s through sci-fi. A plethora of terms has been coined since then: virtual actors, synthespians, digital clones… which is why we feel the need to offer a definition of our own to clear things up. However, agreeing with the terms that we use doesn’t matter as much as understanding the differences between the concepts that we’re describing.
If a digital human is, to summarize, a photorealistic 3D model, a virtual human would be more akin to a human itself. The term “virtual” after all means that this human is almost as real as you and I; it takes into account the occupation, personality and story of said human. A digital human is a complex, high-end, expensive 3D asset, whereas a virtual human is the assistant, the actor, the influencer, in short a digital human with a job. One could argue that not all virtual humans are digital humans – some might be stylized or cartoon characters – but those fall outside the photorealistic scope of this article.
The challenge of integration
This distinction is crucial when considering the difficulty of taking a 3D asset, no matter its quality, and turning it into a living being. In other words, what I’m proposing here is to make a clear distinction between a mere 3D model and a fully fledged use case. The former is an asset that, by itself, doesn’t do anything. The latter is the same asset integrated into software, brought to life and given a purpose by a complex mixture of technical proficiency, interactive storytelling and business acumen.
Virtual humans can be anything from virtual assistants and concierges to virtual influencers. Their purpose could be to serve as your IT desk engineer, as your first HR point of contact or as a character whose adventures you follow on Instagram. But most of all, they’re integrated, whether it’s in software, on social media or in stories. And integration is a pretty big deal when considering the level of complexity of a high-end 3D human model.
Virtual humans and AI
Naturally, virtual humans are closely tied to artificial intelligence, and all companies that claim to create their own also claim to have some level of AI expertise. Some even consider AI to be their main activity: notable mentions include Soul Machines, UneeQ and IPsoft with its virtual assistant Amelia. On the other hand, virtual agencies that operate on social media, like Brud or Diigitals, don’t make direct use of artificial intelligence outside of their general storytelling.
However, even companies that do not yet need artificial intelligence will most likely make use of it in the future. In AI lies the key to scalability, and scalability is what can transform a trend into a booming industry. There’s still a long way to go, especially when we take into consideration the need for emotional animations and responses; even if we’re able to have virtual humans perform certain tasks, they still lack the distinctive emotional expressiveness of humans.
Faithfully replicating a public figure
Digital or virtual humans usually have their own identity. A digital double, on the contrary, is the replica of a real human, more often than not a celebrity. The idea is not to create a random agent, or to design a human from scratch, but rather to reproduce as faithfully as possible the appearance and expressions of a recognizable public figure. Naturally, the context and legal implications of such productions are a little different.
The distinction between digital doubles and other digital humans may sometimes blur when the latter are created from unmodified 3D scans of unknown capture subjects. This occurrence is more common than one might think, as many known characters are based on uncredited real people who – sometimes unwittingly – sold their own image for a small check. Yet we shouldn’t consider these cases as digital doubles, since the identity of the original subject is “erased” in the process.
Digital doubles in VFX and games
Digital doubles appear in VFX for the most part. They’re useful in a variety of cases: face replacements, digital stunt doubles, hybridizations or extreme alterations like the aging and de-aging effects of Benjamin Button and The Irishman. The replica of the replicant Rachael in Blade Runner 2049 by MPC is an interesting example: not only was actress Sean Young already well into her fifties, but the plot required Rachael to be identical to her Blade Runner appearance. The team captured a 3D scan of Sean Young in order to reproduce an anatomically accurate skull, then fed reference footage starring Rachael to a skilled team of modelers who worked relentlessly to produce a faithful double.
The VFX industry is not the only one to dabble in digital doubles. They crossed over to video games as real-time tech gradually became powerful enough to allow the reproduction of celebrities. We had already seen athletes in sports games for quite a while, but their appearance was hardly faithful or realistic due to hardware limitations. The industry has come a long way since, prominently featuring stars like Norman Reedus (in the latest Hideo Kojima production, Death Stranding) or Mark Hamill, Gillian Anderson and Henry Cavill (all starring in the extravagant Star Citizen single-player campaign, Squadron 42).
The legal and ethical intricacies of digital doubles
The difference between a simple digital human and a digital double holds significant legal implications. As far as digital humans are concerned, image rights lie in a grey area that often follows procedures laid out by modeling activities. This could prove controversial as a 3D model is very different from a still picture: the possibilities offered by a complete replica of someone are virtually endless. When a company creates a digital human from scratch, it can therefore dodge any tricky legal questions, whereas a digital double inevitably faces these concerns.
Speaking of legal grey areas, what about ethical issues? Digital doubles raise a number of interesting questions in this regard. Some are used to bring deceased artists like Tupac back from the grave, performing in front of crowds who may have been born after his death. Actors are resurrected with promises of eternal careers, first as one-shot opportunities (think of the appearance of the late Peter Cushing in Star Wars: Rogue One) but soon as permanent options. Will James Dean soon steal Ryan Gosling’s job?
Deepfakes = digital humans?
Wait, aren’t those a completely different thing? Whether deepfakes should or shouldn’t be on this list is debatable. My take is that they should, if only because many people make the association – or confusion – between digital humans and deepfakes, however different they may be. This mixup stems from the fact that both evolve in the hyped-up, buzzing and poorly understood realm of artificial intelligence, and that they loosely achieve the same thing – kind of, well, not really, but… in a way, they both create controllable “fake people”.
A key difference, however, is that digital humans are 3D models, meaning that they are packages of structured data, while deepfakes are the results of neural networks with little control over the outcome. Moreover, deepfakes are images, results in 2D format, whereas digital humans are a collection of 3D objects placed within a scene and connected to various software and hardware that run it. This means that the range of movement and the potential that can be achieved by a digital human greatly surpasses that of a deepfake – for now.
What digital humans and deepfakes can and cannot do
A fully rigged digital human is a complete simulation of a human being’s expressions, micro expressions and anatomically correct movements. It’s not meant to be a fake approximation of the end result in 2D, it’s a complete simulation which can go as far as replicating the presence of a skull, muscles, particular skin areas, fat pads and anatomically correct joints. This accuracy is the key to capturing the fine details of human expressiveness. On one hand, a deepfake can only perform what it has been trained to perform and often delivers results that are “close enough”, impressive in the context of a viral video but unusable in feature films; on the other hand, a digital human can do pretty much anything. It can roll its eyes, smirk, laugh, curse you then turn around and run away in fear of retaliation.
To summarize, a digital human is a production-ready asset, a puppet built with state-of-the-art tech that can be used for pretty much anything, while a deepfake is the result of an algorithm that will only output what it has been trained for, frame by frame. Additionally, a digital human can make use of physically based rendering and tech like Unreal Engine’s real-time hair simulation. Drop one in a dimly lit night scene and watch as realistic shadows form on its face, countered by the warm lighting of a nearby campfire!
Deepfakes vs 3D models: pros and cons
Don’t get me wrong, this doesn’t mean that deepfakes are useless or that they are inferior to 3D models. In fact, I would argue that deepfakes are, at their core, much more promising than what we’ve been using so far. Their use is limited and their quality is often lacking, but the results achieved are very exciting and sometimes downright frightening. Recently, California-based company Pinscreen unveiled a curious exhibition at the 2020 edition of the World Economic Forum: a screen and camera that could roughly transform anyone into a series of preselected celebrities, in real time.
The potential impact of deepfakes is very real and their potential is immense, especially if the output can one day match the realism of 3D models. We’re just not quite there yet, but the deepfake boom only started about four years ago after all (although research goes back to the 90s). We can even imagine that one day a similar technology will substitute for 3D modeling altogether. After all, what matters in the world of computer graphics is the result, not how it was achieved. The key takeaway here is not that deepfakes are inferior, but rather that they’re not the same thing as digital humans, though they might someday replace them.
Closing words: what are we to make of this mess
Digital human tech is fascinating. We’re talking about real innovation, and real innovation is messy, it confuses people, it thrives on wild, ambitious statements and yet requires a fine level of understanding to weed out buzzwords and vaporware. Defining the concepts produced in an innovative field is challenging and might even be a vain attempt at clarifying a situation that is still moving at a very fast pace.
This effort is fruitful nonetheless, because the words themselves don’t matter as much as the network of concepts they represent. The only definitive conclusion here is that digital humans aren’t one big homogeneous group, and that different projects evolving within the digital human sphere may encounter very different challenges from one another.
The history of virtual humans is still in the making, and we’re glad to be here to watch it unfold.
And if you have a question or would like to talk about virtual experiences, don’t hesitate to contact us through the Virtuals website.
Until next time!