2019 Machines + Media Working Group

Second Quarter Session
Tuesday, June 11 (9:00AM-12:00PM)

Hosted by Viacom: 1515 Broadway, New York, NY

Why Participate?

At the Machines + Media Working Group, you’ll meet like-minded, data-driven peers.
Join us for a session—to be held each quarter in 2019—to collaborate, share ideas and tricks of the trade, engage in joint projects, and work to build a repository of open source code that can help advance the media industry.

NYC Media Lab has built a community of industry executives and technologists, faculty, students, and entrepreneurs each pursuing applications of data science to media. Advances in artificial intelligence and machine learning, natural language processing, computer vision, and a variety of other technologies are among the most compelling developments for media companies, offering possibilities for the creation, production, distribution and monetization of media content and interactive experiences.

RSVP to attend

Agenda for June 11, 2019

9:00AM: Breakfast & Networking
Get to know the group members over coffee & breakfast.

9:30AM: Introduction and Overview
About the 2019 Machines + Media Working Group and ways to collaborate.

9:45AM: What's State of the Art?
Michael I Mandel, Associate Professor of Computer and Information Science at Brooklyn College and the CUNY Graduate Center, will lead a technical presentation on speech synthesis. Mandel works at the intersection of machine learning, signal processing, and psychoacoustics.

About the presentation:
Noise and reverberation are two of the biggest problems for voice communication technologies like conversational assistants, hearing prostheses, and mobile communication. Traditional approaches mitigate these interferences by modifying the noisy signal, which leads to two problems: under-suppression of noise and over-suppression of speech. As an alternative, we extract from the noisy speech enough information to drive a speech synthesizer, which then generates a high-quality, noise-free rendition of the speech from scratch to replace the original noisy signal. In listening tests, the system's output is rated as higher quality than state-of-the-art speech enhancement methods and as producing more natural pronunciations than traditional text-to-speech systems.
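
For a concrete picture of the approach, below is a minimal, illustrative sketch in Python of the general analysis-then-resynthesis pipeline the abstract describes: a neural network predicts clean acoustic features from the noisy input, and a vocoder then regenerates speech from those features instead of filtering the noisy waveform. This is not the presenter's actual system; the architecture, feature choices, Griffin-Lim vocoder stand-in, and all names below are assumptions made for illustration.

    # Sketch only: predict clean log-mel features from noisy speech,
    # then regenerate a waveform from scratch with a vocoder.
    import torch
    import torch.nn as nn
    import librosa
    import numpy as np

    SR, N_FFT, HOP, N_MELS = 16000, 512, 128, 80   # assumed settings

    def mel_features(wav: np.ndarray) -> np.ndarray:
        """Noisy-speech features fed to the predictor: log-mel frames."""
        mel = librosa.feature.melspectrogram(y=wav, sr=SR, n_fft=N_FFT,
                                             hop_length=HOP, n_mels=N_MELS)
        return np.log(mel + 1e-6).T  # (frames, n_mels)

    class CleanFeaturePredictor(nn.Module):
        """Maps noisy log-mel frames to an estimate of clean log-mel frames."""
        def __init__(self, n_mels: int = N_MELS, hidden: int = 256):
            super().__init__()
            self.rnn = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_mels)

        def forward(self, x):          # x: (batch, frames, n_mels)
            h, _ = self.rnn(x)
            return self.out(h)         # predicted clean log-mel frames

    def resynthesize(clean_logmel: np.ndarray) -> np.ndarray:
        """Vocoder stage: regenerate audio from the predicted features.
        Griffin-Lim stands in for the neural vocoder a real system would use."""
        mel = np.maximum(np.exp(clean_logmel.T) - 1e-6, 0.0)
        return librosa.feature.inverse.mel_to_audio(mel, sr=SR, n_fft=N_FFT,
                                                    hop_length=HOP)

    # Usage sketch: noisy audio in, freshly synthesized audio out.
    # noisy, _ = librosa.load("noisy_speech.wav", sr=SR)
    # feats = torch.tensor(mel_features(noisy), dtype=torch.float32).unsqueeze(0)
    # model = CleanFeaturePredictor()          # would be trained on paired data
    # clean_feats = model(feats).squeeze(0).detach().numpy()
    # enhanced = resynthesize(clean_feats)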

10:30AM: What's Currently in Market?
Hear a technical presentation from a startup executive.

11:00AM: Roundtable Discussion: What’s Top of Mind?
Engage in a group discussion on top priorities for applications of data science in media.

12:00PM: Event closes

Second Quarter Session Location

Viacom | 1515 Broadway, New York, NY 10036 (Google Maps)

Working Group Recap & Related Links

Q1 Session

The first quarter session was held on Friday, February 22 at The Associated Press. The event included:

  • A research presentation from Lydia Chilton, Professor in the Computer Science Department at Columbia University. Her work focuses on crowdsourcing and on decomposing hard design problems so that people and machines can collaborate to solve them. She presented VisiBlends, a system that uses computational techniques and natural language processing to automatically generate visual blends that can pass human visual object recognition.

  • A startup presentation from Rungson Samroengraja, President & COO of Satisfi Labs. Satisfi Labs is a conversational AI platform; its Answer Engine powers chatbots, voice experiences, messaging apps, and website forums to answer customer questions in real time.

  • A startup presentation from Armando Kirwin, Co-Founder of Artie. Artie helps content creators bring virtual characters to life on mobile. Its network shares insights and analytics in real time, allowing creators to understand how consumers interact with each character.