Microsoft introduces deepfake-detecting technology:


The term "deepfake" refers to videos in which AI software is used to manipulate visuals, for example replacing one person's face with another's.

The original process worked by feeding a computer many still images of one person and video footage of another. Software then used these to generate a new video featuring the former's face in place of the latter's, with matching expressions, lip-sync and other movements. Since then, the process has been simplified, opening it up to more users, and now requires fewer photos to work. Some apps need only a single selfie to substitute a film star's face with that of the user within clips from Hollywood movies.

But there are concerns the process can also be abused to create misleading clips, in which a prominent figure is made to say or do something that never happened, for political or other gain. Microsoft's AI looks for the tiny imperfections of a deepfake image that are undetectable by the human eye.

Microsoft has teamed up with Project Origin, whose members include the BBC and the New York Times, to trial the new tech. Microsoft said: "In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies."

The tool, called Video Authenticator, provides what Microsoft calls "a percentage chance, or confidence score" that the media has been artificially manipulated.

"In the case of a video, it can provide this percentage in real-time on each frame as the video plays," the company writes in a blog post announcing the tech. "It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye." Microsoft hopes the technology will help curb misinformation during the 2020 US presidential campaign and election.
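To make the per-frame idea concrete, here is a toy sketch, not Microsoft's actual model, of how a tool might emit a "confidence score" for each frame. It uses a crude stand-in heuristic: counting abrupt intensity jumps as a proxy for the blending-boundary artifacts described above. Frames are plain lists of grayscale pixel rows, and all function names here are hypothetical.

```python
# Toy illustration only: a crude per-frame "manipulation score".
# Real detectors use trained neural networks, not a pixel threshold.

def frame_confidence(frame):
    """Return a 0-1 score for one grayscale frame (list of pixel rows)."""
    jumps = 0
    total = 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += 1
            if abs(a - b) > 64:  # abrupt transition: possible blending seam
                jumps += 1
    return jumps / total if total else 0.0

def score_video(frames):
    """Per-frame scores, mirroring the tool's real-time per-frame output."""
    return [frame_confidence(f) for f in frames]

smooth = [[10, 12, 11, 13]] * 4    # gentle gradients -> low score
seamy = [[10, 200, 12, 198]] * 4   # hard transitions -> high score
print(score_video([smooth, seamy]))  # -> [0.0, 1.0]
```

A genuine system would score each frame with a model trained on real and synthetic faces; the point of the sketch is only the shape of the output, one score per frame as the video plays.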



Author: Nischal Karki
