Pretty sure the video says it has already been done (in 10% of cases). People were superimposing someone else's face onto porn videos to make it look like such-and-such had a sex tape. It's not true, so the person who was character-assassinated now has to fight a landslide of false accusations (which I assume is what the speaker in the video does).
This has real potential to take things to a new low, and honestly I don't see a bottom to it.
What we need is the reflexive technology to identify such fakery.
That's what's being worked on. I just don't see it really working: until people are willing to acknowledge that what they see may be a fabrication, they'll take it at face value and be inclined to spread it. I don't think the majority of people will try to verify something before believing and propagating it.
The only way tech can stop it is if detection is applied before the content is published; for example, if YouTube scanned videos for deepfakes before allowing them to go up. That's akin to censorship, though, not to mention the issue of false positives.
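Just to make the false-positive problem concrete, here's a rough sketch of what that kind of pre-publication gate might look like. Everything in it is hypothetical (the `deepfake_score` stand-in, the threshold, the names); no real platform's API is being described. The point is that a detector only gives you a score, and wherever you set the cutoff, you trade missed fakes against blocking legitimate uploads.

```python
# Hypothetical pre-publication gate; not any real platform's API.
# A real detector outputs a score that has to be thresholded, and
# that threshold is exactly where false positives come from.

import random


def deepfake_score(video_path: str) -> float:
    """Stand-in for a real detector: returns a fake-probability in [0, 1]."""
    return random.random()  # a real model would analyze frames/audio


# Arbitrary cutoff: raising it lets more fakes through,
# lowering it blocks more legitimate videos.
THRESHOLD = 0.9


def allow_publication(video_path: str) -> bool:
    score = deepfake_score(video_path)
    if score >= THRESHOLD:
        # A false positive here means a genuine video is suppressed
        # before anyone ever sees it -- automated prior restraint.
        print(f"Blocked {video_path} (score={score:.2f})")
        return False
    print(f"Published {video_path} (score={score:.2f})")
    return True


if __name__ == "__main__":
    allow_publication("upload_0001.mp4")
```

At any realistic upload volume, even a tiny false-positive rate at that threshold means huge numbers of legitimate videos silently blocked, which is why the censorship worry isn't hypothetical.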