Deepfakes (a portmanteau of "deep learning" and "fake") are synthetic media in which a person in an existing image or video is replaced with someone else's likeness. While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive. The main machine learning methods used to create deepfakes are based on deep learning and involve training generative neural network architectures, such as autoencoders or generative adversarial networks (GANs).
Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit their use.
Photo manipulation was developed in the 19th century and soon applied to motion pictures. Technology steadily improved during the 20th century, and more quickly with digital video.
Deepfake technology has been developed by researchers at academic institutions beginning in the 1990s, and later by amateurs in online communities. More recently the methods have been adopted by industry.
Academic research related to deepfakes lies predominantly within the field of computer vision, a subfield of computer science. An early landmark project was the Video Rewrite program, published in 1997, which modified existing video footage of a person speaking to depict that person mouthing the words contained in a different audio track. It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of the subject's face.
Contemporary academic projects have focused on creating more realistic videos and on improving techniques. The "Synthesizing Obama" program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track. The project lists as a main research contribution its photorealistic technique for synthesizing mouth shapes from audio. The Face2Face program, published in 2016, modifies video footage of a person's face to depict them mimicking the facial expressions of another person in real time. The project lists as a main research contribution the first method for re-enacting facial expressions in real time using a camera that does not capture depth, making it possible for the technique to be performed using common consumer cameras.
In August 2018, researchers at the University of California, Berkeley published a paper introducing a fake dancing app that can create the impression of masterful dancing ability using AI. This project expands the application of deepfakes to the entire body; previous works focused on the head or parts of the face.
Researchers have also shown that deepfakes are expanding into other domains, such as tampering with medical imagery. In this work, it was shown how an attacker can automatically inject or remove lung cancer from a patient's 3D CT scan. The result was convincing enough to fool three radiologists and a state-of-the-art lung cancer detection AI. To demonstrate the threat, the authors successfully performed the attack on a hospital in a white hat penetration test.
A survey of deepfakes, published in May 2020, provides a timeline of how the creation and detection of deepfakes have advanced over the last few years. The survey identifies that researchers have been focusing on resolving the following challenges of deepfake creation:
- Generalization. High-quality deepfakes are often achieved by training on hours of footage of the target. This challenge is to minimize the amount of training data required to produce quality images and to enable the execution of trained models on new identities (unseen during training).
- Paired Training. Training a supervised model can produce high-quality results, but requires data pairing. This is the process of finding examples of inputs and their desired outputs for the model to learn from. Data pairing is laborious and impractical when training on multiple identities and facial behaviors. Some solutions include self-supervised training (using frames from the same video), the use of unpaired networks such as Cycle-GAN, or the manipulation of network embeddings.
- Identity leakage. This is where the identity of the driver (i.e., the actor controlling the face in a reenactment) is partially transferred to the generated face. Some solutions proposed include attention mechanisms, few-shot learning, disentanglement, boundary conversions, and skip connections.
- Occlusions. When part of the face is obstructed with a hand, hair, glasses, or any other item then artifacts can occur. A common occlusion is a closed mouth which hides the inside of the mouth and the teeth. Some solutions include image segmentation during training and in-painting.
- Temporal coherence. In videos containing deepfakes, artifacts such as flickering and jitter can occur because the network has no context of the preceding frames. Some researchers provide this context or use novel temporal coherence losses to help improve realism. As the technology improves, these artifacts are diminishing.
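The temporal coherence losses mentioned above penalize differences between consecutive generated frames. This is a minimal NumPy sketch of the idea only; the function name and the use of a plain mean squared frame difference are illustrative assumptions, not any specific paper's loss.

```python
import numpy as np

def temporal_coherence_loss(frames):
    """Mean squared difference between consecutive frames.

    frames: array of shape (T, H, W), a sequence of T grayscale frames.
    Adding this penalty to a generator's training objective discourages
    the flicker and jitter described above.
    """
    diffs = np.diff(frames, axis=0)   # frame-to-frame changes
    return float(np.mean(diffs ** 2))

# A static sequence is perfectly coherent; a sequence alternating between
# all-dark and all-bright frames maximally "flickers".
static = np.ones((5, 4, 4))
flicker = np.stack([np.full((4, 4), t % 2, dtype=float) for t in range(5)])
```

In practice such a term is combined with the usual reconstruction and adversarial losses rather than used alone.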
Overall, deepfakes are expected to have implications across media and society, including media production, media representations, media audiences, gender, law and regulation, and politics.
The term deepfakes originated around the end of 2017 from a Reddit user named "deepfakes". He, as well as others in the Reddit community r/deepfakes, shared deepfakes they created; many videos involved celebrities’ faces swapped onto the bodies of actresses in pornographic videos, while non-pornographic content included many videos with actor Nicolas Cage’s face swapped into various movies.
Other online communities remain, including Reddit communities that do not share pornography, such as r/SFWdeepfakes (short for "safe for work deepfakes"), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios. Other online communities continue to share pornography on platforms that have not banned deepfake pornography.
In January 2018, a proprietary desktop application called FakeApp was launched. The app allows users to easily create and share videos in which one face is swapped for another. As of 2019, FakeApp has been superseded by open-source alternatives such as Faceswap, the command line-based DeepFaceLab, and web-based apps such as DeepfakesWeb.com.
Larger companies have also begun to use deepfakes. The mobile app giant Momo created the application Zao, which allows users to superimpose their face onto television and movie clips with a single picture. The Japanese AI company DataGrid made a full-body deepfake that can create a person from scratch, which it intends to use for fashion and apparel.
Audio deepfakes also exist, as does AI software capable of detecting deepfakes and of cloning human voices after only five seconds of listening time.
A mobile deepfake app, Impressions, was launched in March 2020. It was the first app for the creation of celebrity deepfake videos from mobile phones.
Deepfake technology can not only be used to fabricate the messages and actions of others, it can also be used to revive deceased individuals. On 29 October 2020, Kim Kardashian posted a video featuring her late father Robert Kardashian, whose face was recreated with deepfake technology. The hologram was created by the company Kaleida, which uses a combination of performance, motion tracking, SFX, VFX, and deepfake technologies in its hologram creation.
In another instance, Joaquin Oliver, a victim of the Parkland shooting, was resurrected with deepfake technology. Oliver's parents, on behalf of their nonprofit organization Change the Ref, teamed up with McCann Health to produce a deepfake video advocating for a gun-safety voting campaign. In the deepfake message, Joaquin encourages viewers to vote.
Deepfakes rely on a type of neural network called an autoencoder. These consist of an encoder, which reduces an image to a lower-dimensional latent space, and a decoder, which reconstructs the image from the latent representation. Deepfakes utilize this architecture by having a universal encoder which encodes a person into the latent space. The latent representation contains key features about their facial features and body posture. This can then be decoded with a model trained specifically for the target. This means the target's detailed information will be superimposed on the underlying facial and body features of the original video, represented in the latent space.
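The shared-encoder, per-identity-decoder arrangement described above can be sketched in a few lines of NumPy. This is a toy illustration only: the weights here are random rather than trained, and the dimensions, names, and tanh activation are arbitrary choices for the sketch, not a real deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 "face" compressed to a 16-d latent code.
IMG, LATENT = 64, 16

# One shared (universal) encoder...
W_enc = rng.standard_normal((LATENT, IMG)) * 0.1
# ...and one decoder per identity, each trained (here: just random weights)
# to reconstruct that specific person.
W_dec_a = rng.standard_normal((IMG, LATENT)) * 0.1
W_dec_b = rng.standard_normal((IMG, LATENT)) * 0.1

def encode(face):
    """Compress an image into the latent space of pose/expression features."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct an image from a latent code with an identity's decoder."""
    return W_dec @ latent

face_a = rng.standard_normal(IMG)

# Normal reconstruction: encode person A, decode with A's decoder.
recon_a = decode(encode(face_a), W_dec_a)
# The face swap: encode A's expression and posture, but decode with B's
# decoder, yielding B's face wearing A's underlying expression.
swapped = decode(encode(face_a), W_dec_b)
```

The swap works because both decoders read from the same latent space, so either identity can be rendered from the same pose/expression code.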
A popular upgrade to this architecture attaches a generative adversarial network to the decoder. A GAN trains a generator, in this case the decoder, and a discriminator in an adversarial relationship. The generator creates new images from the latent representation of the source material, while the discriminator attempts to determine whether the image is generated. This pushes the generator to create images that mimic reality extremely well, since any defects would be caught by the discriminator. Both algorithms improve constantly in a zero-sum game. This makes deepfakes difficult to combat, as they are constantly evolving; any time a defect is detected, it can be corrected.
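The adversarial loop can be shown at its smallest scale: a one-dimensional GAN in which the generator is a linear map and the discriminator is logistic regression, with hand-derived gradients. Everything here (the learning rate, the target distribution N(3, 1), the parameter names) is an illustrative assumption; real deepfake GANs are deep convolutional networks, but the alternating generator/discriminator updates have the same shape.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(3, 1). Generator: g(z) = a*z + b, starting
# far from the real distribution. Discriminator: D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr = 0.05

for step in range(2000):
    x_real = rng.normal(3.0, 1.0)
    z = rng.normal()
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w        # d/dx of log D(x)
    a += lr * grad_x * z
    b += lr * grad_x

# Since E[z] = 0, the mean of the generated samples is simply b, which the
# adversarial game drags from 0 toward the real mean of 3.
fake_mean = b
```

Each "defect" here is just a mismatch in the sample mean; the discriminator detects it and the generator corrects it, the zero-sum dynamic the paragraph above describes.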
Deepfakes can be used to generate blackmail materials that falsely incriminate a victim. However, since the fakes cannot reliably be distinguished from genuine materials, victims of actual blackmail can now claim that the true artifacts are fakes, granting them plausible deniability. The effect is to void the credibility of existing blackmail materials, which erases loyalty to blackmailers and destroys their control. This phenomenon can be termed "blackmail inflation", since it "devalues" real blackmail, rendering it worthless. It is possible to repurpose commodity cryptocurrency mining hardware with a small software program to generate such blackmail content for any number of subjects in huge quantities, driving up the supply of fake blackmail content limitlessly and in a highly scalable fashion.
Many deepfakes on the internet feature pornography of people, often female celebrities whose likeness is typically used without their consent. Deepfake pornography prominently surfaced on the Internet in 2017, particularly on Reddit. A report published in October 2019 by Dutch cybersecurity startup Deeptrace estimated that 96% of all deepfakes online were pornographic.
The first deepfake to capture widespread attention was the Daisy Ridley deepfake, which was featured in several articles; other prominent pornographic deepfakes targeted various other celebrities. As of October 2019, most of the deepfake subjects on the internet were British and American actresses, though around a quarter were South Korean, the majority of them K-pop stars.
In June 2019, a downloadable Windows and Linux application called DeepNude was released which used neural networks, specifically generative adversarial networks, to remove clothing from images of women. The app had both a paid and unpaid version, the paid version costing $50. On 27 June the creators removed the application and refunded consumers.
Deepfakes have been used to misrepresent well-known politicians in videos.
- In separate videos, the face of the Argentine President Mauricio Macri has been replaced by the face of Adolf Hitler, and Angela Merkel's face has been replaced with Donald Trump's.
- In April 2018, Jordan Peele collaborated with Buzzfeed to create a deepfake of Barack Obama with Peele's voice; it served as a public service announcement to increase awareness of deepfakes.
- In January 2019, Fox affiliate KCPQ aired a deepfake of Trump during his Oval Office address, mocking his appearance and skin color (and subsequently fired an employee found responsible for the video).
- During the 2020 Delhi Legislative Assembly election campaign, the Delhi Bharatiya Janata Party used similar technology to distribute a version of an English-language campaign advertisement by its leader, Manoj Tiwari, translated into Haryanvi to target Haryana voters. A voiceover was provided by an actor, and AI trained using video of Tiwari speeches was used to lip-sync the video to the new voiceover. A party staff member described it as a "positive" use of deepfake technology, which allowed them to "convincingly approach the target audience even if the candidate didn't speak the language of the voter."
- In April 2020, the Belgian branch of Extinction Rebellion published a deepfake video of Belgian Prime Minister Sophie Wilmès on Facebook. The video promoted a possible link between deforestation and COVID-19. It had more than 100,000 views within 24 hours and received many comments. On the Facebook page where the video appeared, many users interpreted the deepfake video as genuine.
In June 2019, the United States House Intelligence Committee held hearings on the potential malicious use of deepfakes to sway elections.
In March 2018 the multidisciplinary artist Joseph Ayerle published the video artwork Un'emozione per sempre 2.0 (English title: The Italian Game). The artist worked with deepfake technology to create an AI actress, a synthetic version of 1980s movie star Ornella Muti, traveling in time from 1978 to 2018. The Massachusetts Institute of Technology referenced this artwork in the study "Creative Wisdom". The artist used Ornella Muti's time travel to explore generational reflections, while also investigating questions about the role of provocation in the world of art. For the technical realization, Ayerle used scenes of photo model Kendall Jenner: the program replaced Jenner's face with an AI-calculated face of Ornella Muti. As a result, the AI actress has the face of the Italian actress Ornella Muti and the body of Kendall Jenner.
There has been speculation about deepfakes being used for creating digital actors for future films. Digitally constructed/altered humans have already been used in films before, and deepfakes could contribute new developments in the near future. Deepfake technology has already been used by fans to insert faces into existing films, such as the insertion of Harrison Ford's young face onto Han Solo's face in Solo: A Star Wars Story, and techniques similar to those used by deepfakes were used for the acting of Princess Leia in Rogue One.
As deepfake technology advances, Disney has improved its visual effects using high-resolution deepfake face-swapping technology. Disney improved its technology through progressive training programmed to identify facial expressions, implementing a face-swapping feature, and iterating to stabilize and refine the output. This high-resolution deepfake technology is planned for use in movie and television production, where it could save significant operational and production costs. Disney's deepfake generation model can produce AI-generated media at a 1024 x 1024 resolution, much greater than the 256 x 256 resolution of common models, and produces more realistic results. The technology also gives Disney the opportunity to revive dead actors and characters with a quick and simple face swap, resurrecting characters for fans to enjoy.
In 2020, an internet meme emerged utilizing deepfakes to generate videos of people singing the chorus of "Baka Mitai" (ばかみたい), a song from the game Yakuza 0 in the video game series Yakuza. In the series, the melancholic song is sung by the player in a karaoke minigame. Most iterations of this meme use a 2017 video uploaded by user Dobbsyrules, who lip syncs the song, as a template.
Deepfake photographs can be used to create sockpuppets, non-existent persons who are active both online and in traditional media. A deepfake photograph appears to have been generated together with a legend for an apparently non-existent person named Oliver Taylor, whose identity was described as a university student in the United Kingdom. The Oliver Taylor persona submitted opinion pieces to several newspapers and was active in online media attacking a British legal academic and his wife as "terrorist sympathizers". The academic had drawn international attention in 2018 when he commenced a lawsuit in Israel against NSO, a surveillance company, on behalf of people in Mexico who alleged they were victims of NSO's phone hacking technology. Reuters could find only scant records for Oliver Taylor, and his purported university had no records for him. Many experts agreed that his photo is a deepfake, yet several newspapers have not retracted his articles or removed them from their websites. It is feared that such techniques are a new battleground in disinformation.
Collections of deepfake photographs of non-existent people on social networks have also been deployed as part of Israeli partisan propaganda. The Facebook page "Zionist Spring" featured photos of non-existent persons along with their "testimonies" purporting to explain why they had abandoned their left-leaning politics to embrace the right wing; the page also contained large numbers of posts from Prime Minister of Israel Benjamin Netanyahu and his son, and from other Israeli right-wing sources. The photographs appear to have been generated by "human image synthesis" technology, computer software that takes data from photos of real people to produce a realistic composite image of a non-existent person. In much of the "testimony", the reason given for embracing the political right was the shock of learning of alleged incitement to violence against the prime minister. Right-wing Israeli television broadcasters then aired the "testimony" of these non-existent persons on the grounds that it was being "shared" online. The broadcasters ran the story even though they could not find such people, explaining "Why does the origin matter?" Other fake Facebook profiles of fictitious individuals contained material that allegedly included such incitement against the right-wing prime minister, in response to which the prime minister complained that there was a plot to murder him.
Audio deepfakes have been used as part of social engineering scams, fooling people into thinking they are receiving instructions from a trusted individual. In 2019, a U.K.-based energy firm's CEO was scammed over the phone when he was ordered to transfer €220,000 into a Hungarian bank account by an individual who used audio deepfake technology to impersonate the voice of the firm's parent company's chief executive.
Credibility and authenticity
Though fake photos have long been plentiful, faking motion pictures has been more difficult, and the presence of deepfakes increases the difficulty of classifying videos as genuine or not. AI researcher Alex Champandard has said people should know how fast things can be corrupted with deepfake technology, and that the problem is not a technical one, but rather one to be solved by trust in information and journalism. Deepfakes can be leveraged to defame, impersonate, and spread disinformation. The primary pitfall is that humanity could fall into an age in which it can no longer be determined whether a medium's content corresponds to the truth.
Similarly, computer science associate professor Hao Li of the University of Southern California states that deepfakes created for malicious use, such as fake news, will be even more harmful if nothing is done to spread awareness of deepfake technology. Li predicted that genuine videos and deepfakes would become indistinguishable in as soon as half a year, as of October 2019, due to rapid advancement in artificial intelligence and computer graphics.
The consequences of a deepfake are not significant enough to destabilize an entire government system; however, deepfakes possess the ability to damage individual entities tremendously. This is because deepfakes are often targeted at one individual, and/or their relations to others, in hopes of creating a narrative powerful enough to influence public opinion or beliefs. This can be done through deepfake voice phishing, which manipulates audio to create fake phone calls or conversations. Another method of deepfake use is fabricated private remarks, which manipulate media to convey individuals voicing damaging comments.
Deepfakes of North Korean leader Kim Jong-un and Russian president Vladimir Putin have also been created by the nonpartisan advocacy group RepresentUs. These deepfakes were meant to air publicly as commercials to relay the notion that interference by these leaders in US elections would be detrimental to the United States' democracy; the commercials also aimed to shock Americans into realizing how fragile democracy is, and how media and news can significantly influence the country's path regardless of credibility. However, the commercials did include an ending comment noting that the footage was not real, and they ultimately did not air due to fears and sensitivity regarding how Americans might react.
A clip from Nancy Pelosi's speech at the Center for American Progress given on 22 May 2019 was slowed down, in addition to the pitch being altered, to make it seem as if she were drunk; however, critics argue that this is not a deepfake.
Donald Trump deepfake
A deepfake of Donald Trump was easily created from a skit Jimmy Fallon performed on NBC's The Tonight Show. In the skit (aired 4 May 2016), Jimmy Fallon dressed up as Donald Trump and pretended to participate in a phone call with Barack Obama, conversing in a manner that presented Trump as bragging about his primary win in Indiana. On 5 May 2019, a deepfake based on this skit was created in which Jimmy Fallon's face was transformed into Donald Trump's face (the audio remained the same). The video was uploaded to YouTube by the founder of Derpfakes with comedic intent.
Barack Obama deepfake
American actor Jordan Peele, BuzzFeed, and Monkeypaw Productions created and produced a deepfake of Barack Obama (uploaded to YouTube on 17 April 2018) that depicted Barack Obama cursing and calling Donald Trump names. In this deepfake, Peele's voice and mouth were transformed and manipulated into Obama's voice and face. The intent of the video was to portray the dangerous consequences and power of deepfakes, and how deepfakes can make anyone say anything.
Potential positive innovations have also emerged alongside the growing popularity and creation of deepfakes. For example, corporate training videos can be created using deepfaked avatars and their voices. An example of this is Synthesia, which uses deepfake technology with avatars to create personalized videos.
Microsoft has developed an app called "Seeing AI" which uses artificial intelligence and deepfake technology to narrate the world around the user, to aid individuals who are blind or have low vision. With deepfake technology, the app can narrate text in documents, scan products and barcodes, recognize people and their emotions, describe the location and setting around the user, identify currency and bills, and communicate these features in different languages at a volume and tone adjusted to the user's surroundings.
Social Media Platforms
Twitter is taking active measures to handle synthetic and manipulated media on its platform. To prevent disinformation from spreading, Twitter places a notice on tweets containing manipulated media and/or deepfakes to signal to viewers that the media is manipulated. A warning also appears to users who plan on retweeting, liking, or otherwise engaging with the tweet. Twitter will also work to provide users a link, next to a tweet containing manipulated or synthetic media, to a Twitter Moment or credible news article on the related topic as a debunking action. Twitter also reserves the ability to remove any tweets containing deepfakes or manipulated media that may pose a harm to users' safety. To better improve its detection of deepfakes and manipulated media, Twitter has asked users who are interested in partnering with it on deepfake detection solutions to fill out a form (due 27 November 2020).
Facebook has taken efforts towards encouraging the creation of deepfakes in order to develop state-of-the-art deepfake detection software. Facebook was the prominent partner in hosting the Deepfake Detection Challenge (DFDC), held in December 2019, with 2,114 participants who generated more than 35,000 models. The top-performing models with the highest detection accuracy were analyzed for similarities and differences; these findings are areas of interest in further research to improve and refine deepfake detection models. Facebook has also detailed that the platform will take down media generated with artificial intelligence used to alter an individual's speech. However, media that has been edited to alter the order or context of words in one's message will remain on the site but be labeled as false, since it was not generated by artificial intelligence.
Most of the academic research surrounding deepfakes seeks to detect the videos. The most popular technique is to use algorithms similar to those used to build the deepfake in order to detect it. By recognizing patterns in how deepfakes are created, the algorithm is able to pick up subtle inconsistencies. Researchers have developed automatic systems that examine videos for errors such as irregular blinking patterns or inconsistent lighting. This technique has also been criticized for creating a "moving goalpost" in which, any time the detection algorithms get better, so do the deepfakes. The Deepfake Detection Challenge, hosted by a coalition of leading tech companies, hopes to accelerate the technology for identifying manipulated content.
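The irregular-blinking cue above can be illustrated with a simple heuristic: early deepfakes often blinked far less than the roughly 15-20 blinks per minute typical of real people. This sketch assumes per-frame "eye openness" scores from some face-landmark tracker; the function names and thresholds are hypothetical, and real detectors use learned models rather than a fixed rule.

```python
def count_blinks(openness, closed_thresh=0.2):
    """Count open-to-closed transitions as blinks.

    openness: per-frame eye-openness scores (1.0 = fully open,
    near 0.0 = closed), as a hypothetical landmark tracker might emit.
    """
    blinks, was_closed = 0, False
    for score in openness:
        is_closed = score < closed_thresh
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def flag_suspicious(openness, fps=30, min_blinks_per_min=5):
    """Flag a clip whose blink rate falls below a plausible human minimum."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / max(minutes, 1e-9) < min_blinks_per_min

# A "real" minute of video with periodic blink dips, and a "fake" one
# that never blinks.
real = ([1.0] * 100 + [0.1] * 5) * 16
fake = [1.0] * 1800
```

This also illustrates the moving-goalpost criticism: once blink statistics became a known tell, deepfake generators simply learned to blink.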
A team at the University of Buffalo published a paper in October 2020 outlining their technique of using reflections of light in the eyes of those depicted to spot deepfakes with a high rate of success, even without the use of an AI detection tool, at least for the time being.
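The intuition behind the eye-reflection cue is that both eyes of a real face view the same light sources, so their corneal highlights should match, while generated faces often get them wrong. The sketch below is an assumption-laden simplification of that idea (thresholded highlight masks compared by intersection-over-union), not the Buffalo team's actual method.

```python
import numpy as np

def highlight_mask(eye_patch, thresh=0.9):
    """Binary mask of specular highlights (brightest pixels) in an eye patch,
    given pixel intensities normalized to [0, 1]."""
    return eye_patch >= thresh

def reflection_similarity(left_eye, right_eye, thresh=0.9):
    """Intersection-over-union of the two eyes' highlight masks.
    High values are consistent with a real, jointly lit face."""
    l = highlight_mask(left_eye, thresh)
    r = highlight_mask(right_eye, thresh)
    union = np.logical_or(l, r).sum()
    if union == 0:
        return 1.0                     # no highlights in either eye
    return np.logical_and(l, r).sum() / union

# Toy 8x8 eye patches: one highlight pixel each, matching vs. mismatched.
eye = np.zeros((8, 8)); eye[2, 3] = 1.0
other = np.zeros((8, 8)); other[6, 6] = 1.0
```

A real system would first align the two eye patches geometrically; here the patches are assumed pre-aligned for simplicity.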
Other techniques use blockchain to verify the source of the media. Videos would have to be verified through the ledger before being shown on social media platforms. With this technology, only videos from trusted sources would be approved, decreasing the spread of possibly harmful deepfake media.
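The tamper-evidence property such a ledger relies on can be shown with a plain hash chain: each entry commits to a media file's hash and to the previous entry, so altering either the media or the history breaks the chain. This single-process sketch (class and field names are invented for illustration) omits the consensus and distribution that make a real blockchain trustworthy.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Append-only hash chain of registered media."""

    def __init__(self):
        # Genesis entry anchors the chain.
        self.chain = [{"prev": "0" * 64, "media_hash": None}]

    def register(self, media: bytes, source: str):
        """Record a media file's hash, linked to the previous entry."""
        block = {
            "prev": sha256(json.dumps(self.chain[-1], sort_keys=True).encode()),
            "media_hash": sha256(media),
            "source": source,
        }
        self.chain.append(block)

    def is_registered(self, media: bytes) -> bool:
        """A platform would check this before displaying the video."""
        h = sha256(media)
        return any(b["media_hash"] == h for b in self.chain)
```

Any single-bit edit to a registered video changes its SHA-256 hash, so the tampered copy no longer matches the ledger entry.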
Digital signing of all video and imagery by cameras and video cameras, including smartphone cameras, has been suggested as a way to fight deepfakes. This would allow every photograph or video to be traced back to its original owner, although such traceability could also be used to pursue dissidents.
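A camera-side signing scheme would pair each capture with a cryptographic tag verifiable later. A real deployment would use asymmetric signatures (the camera holds a private key and publishes the public key); since Python's standard library has no such primitive built in, this sketch substitutes an HMAC over a shared secret purely to illustrate the sign-at-capture, verify-at-display flow. The secret value and function names are invented.

```python
import hashlib
import hmac

# Stand-in for a per-device private key burned into the camera at manufacture.
CAMERA_SECRET = b"device-unique-secret-key"

def sign_capture(media: bytes) -> str:
    """Produce a tag at capture time binding the media to this camera."""
    return hmac.new(CAMERA_SECRET, media, hashlib.sha256).hexdigest()

def verify_capture(media: bytes, tag: str) -> bool:
    """Check that the media is byte-identical to what the camera signed."""
    return hmac.compare_digest(sign_capture(media), tag)
```

Any post-capture edit, including a deepfake face swap, invalidates the tag; the privacy trade-off is that the same tag ties the capture to a specific device and owner.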
Since 2017, Samantha Cole of Vice has published a series of articles covering news surrounding deepfake pornography. On 31 January 2018, Gfycat began removing all deepfakes from its site. On Reddit, the r/deepfakes subreddit was banned on 7 February 2018 for violating the policy against "involuntary pornography". In the same month, representatives from Twitter stated that they would suspend accounts suspected of posting non-consensual deepfake content. Chat site Discord has taken action against deepfakes in the past and has taken a general stance against them. In September 2018, Google added "involuntary synthetic pornographic imagery" to its ban list, allowing anyone to request the blocking of results showing their fake nudes.
In February 2018, Pornhub said that it would ban deepfake videos on its website because it considers them "non-consensual content" which violates its terms of service. It also stated previously to Mashable that it would take down content flagged as deepfakes. Writers from Motherboard and BuzzFeed News reported that searching "deepfakes" on Pornhub still returned multiple recent deepfake videos.
Facebook has previously stated that it would not remove deepfakes from its platforms. Such videos will instead be flagged as fake by third parties and then given a lessened priority in users' feeds. This response was prompted in June 2019 after a deepfake featuring a 2016 video of Mark Zuckerberg circulated on Facebook and Instagram.
In the United States, there have been some responses to the problems posed by deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced to the US Senate, and in 2019 the DEEPFAKES Accountability Act was introduced in the House of Representatives. Several states have also introduced legislation regarding deepfakes, including Virginia, Texas, California, and New York. On 3 October 2019, California governor Gavin Newsom signed into law Assembly Bills No. 602 and No. 730. Assembly Bill No. 602 provides individuals targeted by sexually explicit deepfake content made without their consent with a cause of action against the content's creator. Assembly Bill No. 730 prohibits the distribution of malicious deepfake audio or visual media targeting a candidate running for public office within 60 days of their election.
In November 2019, China announced that deepfakes and other synthetically faked footage should bear a clear notice about their fakeness starting in 2020. Failure to comply could be considered a crime, the Cyberspace Administration of China stated on its website. The Chinese government appears to be reserving the right to prosecute both users and online video platforms failing to abide by the rules.
In the United Kingdom, producers of deepfake material can be prosecuted for harassment, but there are calls to make deepfake a specific crime; in the United States, where charges as varied as identity theft, cyberstalking, and revenge porn have been pursued, the notion of a more comprehensive statute has also been discussed.
In Canada, the Communications Security Establishment released a report which said that deepfakes could be used to interfere in Canadian politics, particularly to discredit politicians and influence voters. As a result, there are multiple ways for citizens in Canada to deal with deepfakes if they are targeted by them.
Response from DARPA
The Defense Advanced Research Projects Agency (DARPA) has funded a project in which individuals compete to create AI-generated videos, audio, and images, as well as automated tools to detect these deepfakes. DARPA has even hosted a "proposers day" for a project affiliated with the Semantic Forensics Program, in which researchers work to prevent the viral spread of AI-manipulated media. DARPA and the Semantic Forensics Program are also working together to detect AI-manipulated media through efforts focused on training computers to utilize common-sense, logical reasoning. DARPA has also created a Media Forensics (MediFor) program to mitigate the increasing harm that deepfakes and AI-generated media pose. This program aims not only to detect deepfakes, but also to provide information regarding how the media was created. Simultaneously, DARPA's goal is to address and emphasize the consequential role of deepfakes and their influence on decision making.
In popular culture
- The mid-December 1986 issue of Analog magazine published the novelette "Picaper" by Jack Wodhams. Its plot revolves around digitally enhanced or digitally generated videos produced by skilled hackers serving unscrupulous lawyers and political figures.
- The 1987 film The Running Man starring Arnold Schwarzenegger depicts an autocratic government using computers to digitally replace the faces of actors with those of wanted fugitives to make it appear the fugitives had been neutralized.
- In the 1992 techno-thriller A Philosophical Investigation by Philip Kerr, "Wittgenstein", the main character and a serial killer, uses both software similar to deepfake technology and a virtual reality suit to have sex with an avatar of Isadora "Jake" Jakowicz, the female police lieutenant assigned to catch him.
- The 1993 film Rising Sun, starring Sean Connery and Wesley Snipes, depicts a character, Jingo Asakuma, who reveals that a computer disc has been digitally altered to implicate a competitor.
- Deepfake technology is part of the plot of the 2019 BBC One drama The Capture. The series follows British ex-soldier Shaun Emery, who is accused of assaulting and abducting his barrister. Expertly doctored CCTV footage is used to set him up and mislead the police investigating him.
- Al Davis vs. the NFL — The narrative structure of this 2021 documentary, part of ESPN's 30 for 30 documentary series, uses deepfake versions of the film's two central characters, both deceased—Al Davis, who owned the Las Vegas Raiders during the team's tenure in Oakland and Los Angeles, and Pete Rozelle, the NFL commissioner who frequently clashed with Davis.
- ^ Brandon, John (16 February 2018). "Terrifying high-tech porn: Creepy 'deepfake' videos are on the rise". Fox News. Retrieved 20 February 2018.
- ^ "Prepare, Don't Panic: Synthetic Media and Deepfakes". witness.org. Retrieved 25 November 2020.
- ^ a b Kietzmann, J.; Lee, L. W.; McCarthy, I. P.; Kietzmann, T. C. (2020). "Deepfakes: Trick or treat?". Business Horizons. 63 (2): 135–146. doi:10.1016/j.bushor.2019.11.006.
- ^ Schwartz, Oscar (12 November 2018). "You thought fake news was bad? Deep fakes are where truth goes to die". The Guardian. Retrieved 14 November 2018.
- ^ a b c d Charleer, Sven (17 May 2019). "Family fun with deepfakes. Or how I got my wife onto the Tonight Show". Medium. Retrieved 8 November 2019.
- ^ "What Are Deepfakes & Why the Future of Porn is Terrifying". Highsnobiety. 20 February 2018. Retrieved 20 February 2018.
- ^ "Experts fear face swapping tech could start an international showdown". The Outline. Retrieved 28 February 2018.
- ^ Roose, Kevin (4 March 2018). "Here Come the Fake Videos, Too". The New York Times. ISSN 0362-4331. Retrieved 24 March 2018.
- ^ Schreyer, Marco; Sattarov, Timur; Reimer, Bernd; Borth, Damian (October 2019), Adversarial Learning of Deepfakes in Accounting, arXiv:1910.03810, Bibcode:2019arXiv191003810S
- ^ a b c Ghoshal, Abhimanyu (7 February 2018). "Twitter, Pornhub and other platforms ban AI-generated celebrity porn". The Next Web. Retrieved 9 November 2019.
- ^ a b Clarke, Yvette D. (28 June 2019). "H.R.3230 - 116th Congress (2019-2020): Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019". www.congress.gov. Retrieved 16 October 2019.
- ^ a b c Harwell, Drew (12 June 2019). "Top AI researchers race to detect 'deepfake' videos: 'We are outgunned'". The Washington Post. Retrieved 8 November 2019.
- ^ Sanchez, Julian (8 February 2018). "Thanks to AI, the future of 'fake news' is being pioneered in homemade porn". NBC News. Retrieved 8 November 2019.
- ^ a b c Porter, Jon (2 September 2019). "Another convincing deepfake app goes viral prompting immediate privacy backlash". The Verge. Retrieved 8 November 2019.
- ^ a b Bregler, Christoph; Covell, Michele; Slaney, Malcolm (1997). "Video Rewrite: Driving Visual Speech with Audio". Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. 24: 353–360. doi:10.1145/258734.258880. S2CID 2341707.
- ^ a b c Suwajanakorn, Supasorn; Seitz, Steven M.; Kemelmacher-Shlizerman, Ira (July 2017). "Synthesizing Obama: Learning Lip Sync from Audio". ACM Trans. Graph. 36 (4): 95:1–95:13. doi:10.1145/3072959.3073640. S2CID 207586187.
- ^ a b c Thies, Justus; Zollhöfer, Michael; Stamminger, Marc; Theobalt, Christian; Nießner, Matthias (June 2016). "Face2Face: Real-Time Face Capture and Reenactment of RGB Videos". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE: 2387–2395. arXiv:2007.14808. doi:10.1109/CVPR.2016.262. ISBN 9781467388511. S2CID 206593693.
- ^ a b Farquhar, Peter (27 August 2018). "An AI program will soon be here to help your deepface dancing – just don't call it deepfake". Business Insider Australia. Retrieved 27 August 2018.
- ^ "Deepfakes for dancing: you can now use AI to fake those dance moves you always wanted". The Verge. Retrieved 27 August 2018.
- ^ Mirsky, Yisroel; Mahler, Tom; Shelef, Ilan; Elovici, Yuval (2019). CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning. pp. 461–478. ISBN 978-1-939133-06-9.
- ^ Mirsky, Yisroel; Lee, Wenke (12 May 2020). "The Creation and Detection of Deepfakes: A Survey". arXiv:2004.11138. doi:10.1145/3425780. S2CID 216080410.
- ^ Karnouskos, Stamatis (2020). "Artificial Intelligence in Digital Media: The Era of Deepfakes" (PDF). IEEE Transactions on Technology and Society. 1 (3): 1. doi:10.1109/TTS.2020.3001312. S2CID 221716206. Retrieved 9 July 2020.
- ^ a b c Cole, Samantha (24 January 2018). "We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now". Vice. Retrieved 4 May 2019.
- ^ Haysom, Sam (31 January 2018). "People Are Using Face-Swapping Tech to Add Nicolas Cage to Random Movies and What Is 2018". Mashable. Retrieved 4 April 2019.
- ^ "r/SFWdeepfakes". Reddit. Retrieved 12 December 2018.
- ^ Hathaway, Jay (8 February 2018). "Here's where 'deepfakes,' the new fake celebrity porn, went after the Reddit ban". The Daily Dot. Retrieved 22 December 2018.
- ^ "What is a Deepfake and How Are They Made?". Online Tech Tips. 23 May 2019. Retrieved 8 November 2019.
- ^ Robertson, Adi (11 February 2018). "I'm using AI to face-swap Elon Musk and Jeff Bezos, and I'm really bad at it". The Verge. Retrieved 8 November 2019.
- ^ "Deepfakes web | The best online faceswap app". Deepfakes web. Retrieved 21 February 2021.
- ^ "Faceswap is the leading free and Open Source multi-platform Deepfakes software". 15 October 2019 – via WordPress.
- ^ "DeepFaceLab is a tool that utilizes machine learning to replace faces in videos. Includes prebuilt ready to work standalone Windows 7,8,10 binary (look readme.md).: iperov/DeepFaceLab". 19 June 2019 – via GitHub.
- ^ Pangburn, D. J. (21 September 2019). "You've been warned: Full body deepfakes are the next step in AI-based human mimicry". Fast Company. Retrieved 8 November 2019.
- ^ Lyons, Kim (29 January 2020). "FTC says the tech behind audio deepfakes is getting better". The Verge.
- ^ "Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"". google.github.io.
- ^ Jia, Ye; Zhang, Yu; Weiss, Ron J.; Wang, Quan; Shen, Jonathan; Ren, Fei; Chen, Zhifeng; Nguyen, Patrick; Pang, Ruoming; Moreno, Ignacio Lopez; Wu, Yonghui (2 January 2019). "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis". arXiv:1806.04558. Bibcode:2018arXiv180604558J.
- ^ "TUM Visual Computing: Prof. Matthias Nießner". www.niessnerlab.org.
- ^ "Full Page Reload". IEEE Spectrum: Technology, Engineering, and Science News.
- ^ "Contributing Data to Deepfake Detection Research".
- ^ Thalen, Mikael. "You can now deepfake yourself into a celebrity with just a few clicks". daily dot. Retrieved 3 April 2020.
- ^ Matthews, Zane. "Fun or Fear: Deepfake App Puts Celebrity Faces In Your Selfies". Kool1079. Retrieved 6 March 2020.
- ^ "Kanye West, Kim Kardashian and her dad: Should we make holograms of the dead?". BBC News. 31 October 2020. Retrieved 11 November 2020.
- ^ Modems, The (30 October 2020). "Kanye West Gave Kim Kardashian a Hologram of Her Father for Her Birthday". themodems. Retrieved 11 November 2020.
- ^ "Parkland victim Joaquin Oliver comes back to life in heartbreaking plea to voters". adage.com. 2 October 2020. Retrieved 11 November 2020.
- ^ Zucconi, Alan (14 March 2018). "Understanding the Technology Behind DeepFakes". Alan Zucconi. Retrieved 8 November 2019.
- ^ a b c d e Kan, C. E. (10 December 2018). "What The Heck Are VAE-GANs?". Medium. Retrieved 8 November 2019.
- ^ a b "These New Tricks Can Outsmart Deepfake Videos—for Now". Wired. ISSN 1059-1028. Retrieved 9 November 2019.
- ^ Limberg, Peter (24 May 2020). "Blackmail Inflation". CultState. Retrieved 18 January 2021.
- ^ "For Kappy". Telegraph. 24 May 2020. Retrieved 18 January 2021.
- ^ a b c Dickson, E. J.; Dickson, E. J. (7 October 2019). "Deepfake Porn Is Still a Threat, Particularly for K-Pop Stars". Rolling Stone. Retrieved 9 November 2019.
- ^ a b c Roettgers, Janko (21 February 2018). "Porn Producers Offer to Help Hollywood Take Down Deepfake Videos". Variety. Retrieved 28 February 2018.
- ^ "The State of Deepfake - Landscape, Threats, and Impact" (PDF). Deeptrace. 1 October 2019. Retrieved 7 July 2020.
- ^ Goggin, Benjamin. "From porn to 'Game of Thrones': How deepfakes and realistic-looking fake videos hit it big". Business Insider. Retrieved 9 November 2019.
- ^ Lee, Dave (3 February 2018). "'Fake porn' has serious consequences". Retrieved 9 November 2019.
- ^ a b Cole, Samantha (19 June 2018). "Gfycat's AI Solution for Fighting Deepfakes Isn't Working". Vice. Retrieved 9 November 2019.
- ^ Zoe, Freni (24 November 2019). "Deepfake Porn Is Here To Stay". Medium.
- ^ a b Cole, Samantha; Maiberg, Emanuel; Koebler, Jason (26 June 2019). "This Horrifying App Undresses a Photo of Any Woman with a Single Click". Vice. Retrieved 2 July 2019.
- ^ Cox, Joseph (9 July 2019). "GitHub Removed Open Source Versions of DeepNude". Vice Media.
- ^ "pic.twitter.com/8uJKBQTZ0o". 27 June 2019.
- ^ a b c d "Wenn Merkel plötzlich Trumps Gesicht trägt: die gefährliche Manipulation von Bildern und Videos". az Aargauer Zeitung. 3 February 2018.
- ^ Patrick Gensing. "Deepfakes: Auf dem Weg in eine alternative Realität?".
- ^ Romano, Aja (18 April 2018). "Jordan Peele's simulated Obama PSA is a double-edged warning against fake news". Vox. Retrieved 10 September 2018.
- ^ Swenson, Kyle (11 January 2019). "A Seattle TV station aired doctored footage of Trump's Oval Office speech. The employee has been fired". The Washington Post. Retrieved 11 January 2019.
- ^ Christopher, Nilesh (18 February 2020). "We've Just Seen the First Use of Deepfakes in an Indian Election Campaign". Vice. Retrieved 19 February 2020.
- ^ "#TellTheTruthBelgium". Extinction Rebellion Belgium. Retrieved 21 April 2020.
- ^ Holubowicz, Gerald (15 April 2020). "Extinction Rebellion s'empare des deepfakes". Journalism.design (in French). Retrieved 21 April 2020.
- ^ O'Sullivan, Donie. "Congress to investigate deepfakes as doctored Pelosi video causes stir". CNN. Retrieved 9 November 2019.
- ^ Cizek, Katerina; Uricchio, William; Wolozin, Sarah. Collective Wisdom. Massachusetts Institute of Technology.
- ^ "Ornella Muti in cortometraggio a Firenze" [Ornella Muti in a short film in Florence]. ANSA (in Italian).
- ^ Kemp, Luke (8 July 2019). "In the age of deepfakes, could virtual actors put humans out of business?". The Guardian. ISSN 0261-3077. Retrieved 20 October 2019.
- ^ Radulovic, Petrana (17 October 2018). "Harrison Ford is the star of Solo: A Star Wars Story thanks to deepfake technology". Polygon. Retrieved 20 October 2019.
- ^ Winick, Erin. "How acting as Carrie Fisher's puppet made a career for Rogue One's Princess Leia". MIT Technology Review. Retrieved 20 October 2019.
- ^ a b "High-Resolution Neural Face Swapping for Visual Effects | Disney Research Studios". Retrieved 7 October 2020.
- ^ a b "Disney's deepfake technology could be used in film and TV". Blooloop. Retrieved 7 October 2020.
- ^ A, Jon Lindley (2 July 2020). "Disney Ventures Into Bringing Back 'Dead Actors' Through Facial Recognition". Tech Times. Retrieved 7 October 2020.
- ^ C, Kim (22 August 2020). "Coffin Dance and More: The Music Memes of 2020 So Far". Music Times. Retrieved 26 August 2020.
- ^ Sholihyn, Ilyas (7 August 2020). "Someone deepfaked Singapore's politicians to lip-sync that Japanese meme song". AsiaOne. Retrieved 26 August 2020.
- ^ Damiani, Jesse. "Chinese Deepfake App Zao Goes Viral, Faces Immediate Criticism Over User Data And Security Policy". Forbes. Retrieved 18 November 2019.
- ^ Porter, Jon (2 September 2019). "Another convincing deepfake app goes viral prompting immediate privacy backlash". The Verge. Retrieved 18 November 2019.
- ^ "Ahead of Irish and US elections, Facebook announces new measures against 'deepfake' videos". Independent.ie. Retrieved 7 January 2020.
- ^ "Deepfake Used to Attack Activist Couple Shows New Disinformation Frontier". Reuters. 15 July 2020.
- ^ "'Leftists for Bibi'? Deepfake Pro-Netanyahu Propaganda Exposed: According to a Series of Facebook Posts, the Israeli Prime Minister is Winning over Left-Wing Followers, Except that None of the People in Question Exist". 972 Magazine. 12 August 2020.
- ^ "The Pornography of Incitement: Netanyahu Supporters Continue to Spread Fake Posts in Social Media Groups; Alongside Laughable Trolling, False Images Are Circulated to Amplify Hatred and Division in Israeli Society" (in Hebrew). The Seventh Eye. 9 June 2020.
- ^ Statt, Nick (5 September 2019). "Thieves are now using AI deepfakes to trick companies into sending them money". Retrieved 13 September 2019.
- ^ Damiani, Jesse. "A Voice Deepfake Was Used To Scam A CEO Out Of $243,000". Forbes. Retrieved 9 November 2019.
- ^ "Weaponised deep fakes: National security and democracy on JSTOR". www.jstor.org. Retrieved 21 October 2020.
- ^ a b "Perfect Deepfake Tech Could Arrive Sooner Than Expected". www.wbur.org. Retrieved 9 November 2019.
- ^ a b c Bateman, Jon (2020). "Summary". Deepfakes and Synthetic Media in the Financial System: 1–2.
- ^ a b c "Deepfake Putin is here to warn Americans about their self-inflicted doom". MIT Technology Review. Retrieved 7 October 2020.
- ^ Towers-Clark, Charles. "Mona Lisa And Nancy Pelosi: The Implications Of Deepfakes". Forbes. Retrieved 7 October 2020.
- ^ a b "The rise of the deepfake and the threat to democracy". the Guardian. Retrieved 3 November 2020.
- ^ Fagan, Kaylee. "A viral video that appeared to show Obama calling Trump a 'dips---' shows a disturbing new trend called 'deepfakes'". Business Insider. Retrieved 3 November 2020.
- ^ Chandler, Simon. "Why Deepfakes Are A Net Positive For Humanity". Forbes. Retrieved 3 November 2020.
- ^ a b "Seeing AI App from Microsoft". www.microsoft.com. Retrieved 3 November 2020.
- ^ a b c d "Help us shape our approach to synthetic and manipulated media". blog.twitter.com. Retrieved 7 October 2020.
- ^ "TechCrunch". TechCrunch. Retrieved 7 October 2020.
- ^ a b "Deepfake Detection Challenge Results: An open initiative to advance AI". ai.facebook.com. Retrieved 7 October 2020.
- ^ a b Paul, Katie (4 February 2020). "Twitter to label deepfakes and other deceptive media". Reuters. Retrieved 7 October 2020.
- ^ a b c d Manke, Kara (18 June 2019). "Researchers use facial quirks to unmask 'deepfakes'". Berkeley News. Retrieved 9 November 2019.
- ^ "Join the Deepfake Detection Challenge (DFDC)". deepfakedetectionchallenge.ai. Retrieved 8 November 2019.
- ^ "Exposing GAN-Generated Faces Using Inconsistent Corneal Specular Highlights" (PDF). arxiv.org. Retrieved 1 April 2021.
- ^ a b c "The Blockchain Solution to Our Deepfake Problems". Wired. ISSN 1059-1028. Retrieved 9 November 2019.
- ^ a b Leetaru, Kalev. "Why Digital Signatures Won't Prevent Deep Fakes But Will Help Repressive Governments". Forbes. Retrieved 17 February 2021.
- ^ a b Cole, Samantha (11 June 2019). "This Deepfake of Mark Zuckerberg Tests Facebook's Fake Video Policies". Vice. Retrieved 9 November 2019.
- ^ a b c Cole, Samantha (6 February 2018). "Pornhub Is Banning AI-Generated Fake Porn Videos, Says They're Nonconsensual". Vice. Retrieved 9 November 2019.
- ^ a b Cole, Samantha (31 January 2018). "AI-Generated Fake Porn Makers Have Been Kicked Off Their Favorite Host". Vice. Retrieved 18 November 2019.
- ^ a b Cole, Samantha (6 February 2018). "Twitter Is the Latest Platform to Ban AI-Generated Porn". Vice. Retrieved 8 November 2019.
- ^ Cole, Samantha (11 December 2017). "AI-Assisted Fake Porn Is Here and We're All Fucked". Vice. Retrieved 19 December 2018.
- ^ Böhm, Markus (7 February 2018). ""Deepfakes": Firmen gehen gegen gefälschte Promi-Pornos vor". Spiegel Online. Retrieved 9 November 2019.
- ^ barbara.wimmer. "Deepfakes: Reddit löscht Forum für künstlich generierte Fake-Pornos". futurezone.at (in German). Retrieved 9 November 2019.
- ^ online, heise. "Deepfakes: Auch Reddit verbannt Fake-Porn". heise online (in German). Retrieved 9 November 2019.
- ^ "Reddit verbannt Deepfake-Pornos - derStandard.de". DER STANDARD (in German). Retrieved 9 November 2019.
- ^ Robertson, Adi (7 February 2018). "Reddit bans 'deepfakes' AI porn communities". The Verge. Retrieved 9 November 2019.
- ^ Price, Rob (27 January 2018). "Discord just shut down a chat group dedicated to sharing porn videos edited with AI to include celebrities". Business Insider Australia. Retrieved 28 November 2019.
- ^ "Twitter bans 'deepfake' AI-generated porn". Engadget. Retrieved 28 November 2019.
- ^ a b Harrell, Drew. "Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'". The Washington Post. Retrieved 1 January 2019.
- ^ Gilmer, Damon Beres and Marcus. "A guide to 'deepfakes,' the internet's latest moral crisis". Mashable. Retrieved 9 November 2019.
- ^ a b "Facebook has promised to leave up a deepfake video of Mark Zuckerberg". MIT Technology Review. Retrieved 9 November 2019.
- ^ Sasse, Ben (21 December 2018). "S.3805 - 115th Congress (2017-2018): Malicious Deep Fake Prohibition Act of 2018". www.congress.gov. Retrieved 16 October 2019.
- ^ "'Deepfake' revenge porn is now illegal in Virginia". TechCrunch. Retrieved 16 October 2019.
- ^ Brown, Nina Iacono (15 July 2019). "Congress Wants to Solve Deepfakes by 2020. That Should Worry Us". Slate Magazine. Retrieved 16 October 2019.
- ^ a b "Bill Text - AB-602 Depiction of individual using digital or electronic technology: sexually explicit material: cause of action". leginfo.legislature.ca.gov. Retrieved 9 November 2019.
- ^ a b "Bill Text - AB-730 Elections: deceptive audio or visual media". leginfo.legislature.ca.gov. Retrieved 9 November 2019.
- ^ "China seeks to root out fake news and deepfakes with new online content rules". Reuters. 29 November 2019. Retrieved 17 December 2019.
- ^ Statt, Nick (29 November 2019). "China makes it a criminal offense to publish deepfakes or fake news without disclosure". The Verge. Retrieved 17 December 2019.
- ^ "Call for upskirting bill to include 'deepfake' pornography ban". The Guardian.
- ^ https://cyber.gc.ca/sites/default/files/publications/tdp-2019-report_e.pdf see page 18
- ^ Bogart, Nicole (10 September 2019). "How deepfakes could impact the 2019 Canadian election". Federal Election 2019. Retrieved 28 January 2020.
- ^ "What Can The Law Do About Deepfake". mcmillan.ca. Retrieved 28 January 2020.
- ^ "The US military is funding an effort to catch deepfakes and other AI trickery". MIT Technology Review. Retrieved 7 October 2020.
- ^ a b "DARPA Is Taking On the Deepfake Problem". Nextgov.com. Retrieved 7 October 2020.
- ^ a b c "Media Forensics". www.darpa.mil. Retrieved 7 October 2020.
- ^ "Picaper". Internet Speculative Fiction Database. Retrieved 9 July 2019.
- ^ Philip Kerr, A Philosophical Investigation, ISBN 978-0143117537
- ^ Bernal, Natasha (8 October 2019). "The disturbing truth behind The Capture and real life deepfakes". The Telegraph. Retrieved 24 October 2019.
- ^ Crawley, Peter (5 September 2019). "The Capture: A BBC thriller of surveillance, distortion and duplicity". The Irish Times. Retrieved 24 October 2019.
- ^ "ESPN Films Latest 30 for 30 Documentary Al Davis vs. The NFL to Premiere February 4" (Press release). ESPN. 15 January 2021. Retrieved 5 February 2021.
- ^ Sprung, Shlomo (1 February 2021). "ESPN Documentary 'Al Davis Vs The NFL' Uses Deepfake Technology To Bring Late Raiders Owner Back To Life". Forbes. Retrieved 4 February 2021.