IEEE MIPR Keynote Speakers

John Apostolopoulos
CTO/VP of Enterprise Networking Business, Cisco
Talk Time: 8:30 AM - 9:30 AM 03/28 (Thursday)
Title: Machine Learning for Networked Multimedia Systems


It is an exciting time to work in multimedia and especially networked multimedia systems. Recent advances in both machine learning (ML) and networking provide new capabilities for networked multimedia systems. In addition, the network can provide a new source of data that multimedia experts can leverage to tackle a broader set of problems.

This talk will highlight three examples. First, we’ll examine Intent-Based Networking (a modern architecture for designing and operating a network) and how ML can be used to increase visibility, diagnose problems and identify associated remedies, and provide assurance on application performance for multimedia traffic such as video conferencing or wireless interactive AR/VR. Next, we’ll look at how the move from today’s Cloud-based ML to the promising approach of Distributed ML across Edge and Cloud can lead to improved scalability, reduced latency, and improved privacy for multimedia applications. Lastly, in the context of ever-growing security threats, we examine how ML can be applied to address the challenge of malware hiding in encrypted flows. Specifically, how can we detect malware hidden in encrypted flows without requiring decryption of those flows? It is noteworthy that while ML is often associated with reducing privacy, this example showcases how an elegant application of ML can both preserve privacy and improve security, while requiring lower complexity than traditional approaches.


John Apostolopoulos is VP/CTO of Cisco’s Enterprise Networking Business (Cisco’s largest business) where he drives the technology and architecture direction in strategic areas for the business. This covers the broad Cisco portfolio including Intent-Based Networking (IBN), Internet of Things (IoT), wireless (ranging from Wi-Fi to emerging 5G), application-aware networking, multimedia networking, indoor-location-based services, connected car, machine learning and AI applied to the aforementioned areas, and deep learning for visual analytics.

Previously, John was Lab Director for the Mobile & Immersive Experience Lab at HP Labs. The MIX Lab conducted research on novel mobile devices and sensing, mobile client/cloud multimedia computing, immersive environments, video & audio signal processing, computer vision & graphics, multimedia networking, glasses-free 3D, next-generation plastic displays, wireless, and user experience design.

John has received a number of honors and awards: he is an IEEE Fellow and an IEEE SPS Distinguished Lecturer, was named “one of the world’s top 100 young (under 35) innovators in science and technology” (TR100) by MIT Technology Review, received a Certificate of Honor for contributing to the US Digital TV Standard (Engineering Emmy Award 1997), and his work on media transcoding in the middle of a network while preserving end-to-end security (secure transcoding) was adopted in the JPSEC standard. He has published over 100 papers, five of which received best paper awards, and has about 75 granted US patents. John also has strong collaborations with the academic community: he was a Consulting Associate Professor of EE at Stanford (2000-09) and frequently lectures at MIT. He received his B.S., M.S., and Ph.D. in EECS from MIT.

Danny Lange
Vice President of AI and Machine Learning at Unity Technologies
Talk Time: 8:30 AM - 9:30 AM 03/29 (Friday)
Title: On the Road to Artificial General Intelligence

Artificial intelligence imitates human intelligence, but we are still learning how modern artificial intelligence makes decisions. As AI goes through its own evolutionary process, the community is working to understand how to efficiently improve algorithms. Danny Lange walks you through the role of intelligence in biological evolution and learning, exploring the relationship between intelligence and the senses and demonstrating why a game engine with a spatial (3D) environment in conjunction with a physics engine (gravity, inertia, and collision) is the perfect virtual biodome for AI’s evolution—a controlled, self-sufficient ecosystem that closely replicates the natural outdoor environment. Along the way, Danny explores the scale and speed of simulations and explains how AI is able to learn from an interaction and improve over time as more and more interactions take place. He concludes by sharing new developments in reinforcement learning and the positive effect they will have on a variety of industries.

Dr. Danny Lange is Vice President of AI and Machine Learning at Unity Technologies where he leads multiple initiatives in the field of applied Artificial Intelligence. Unity is the creator of a flexible and high-performance end-to-end development platform used to create rich interactive 2D, 3D, VR and AR experiences.

Previously, Danny was Head of Machine Learning at Uber, where he led the efforts to build a highly scalable Machine Learning platform to support all parts of Uber’s business from the Uber App to self-driving cars. Before joining Uber, Danny was General Manager of Amazon Machine Learning providing internal teams with access to machine intelligence. He also launched an AWS product that offers Machine Learning as a Cloud Service to the public.

Prior to Amazon, he was Principal Development Manager at Microsoft, where he led a product team focused on large-scale Machine Learning for Big Data. Danny spent 8 years working on speech recognition systems, first as CTO of General Magic, Inc., and then as founder of his own company, Vocomo Software. During this time, he worked on General Motors’ OnStar Virtual Advisor, one of the largest deployments of an intelligent personal assistant prior to Siri. Danny started his career as a Computer Scientist at IBM Research. He holds M.S. and Ph.D. degrees in Computer Science from the Technical University of Denmark. He is a member of the ACM and the IEEE Computer Society and has numerous patents to his credit.

Ruzena Bajcsy
Professor, University of California, Berkeley
Talk Time: 5:00 PM - 6:00 PM 03/30 (Saturday)
Title: Multimodal Real-Time Assessment of the Driver State Using Visual, Acoustic, and Body Motion Observations


Our basic hypothesis is that the driver in the car is exposed to different environmental stimuli coming both from the road and from inside the car. These stimuli are multimodal: visual, acoustic, and the motion of the driver modulated by the interaction of the car with the road surface. There are several questions that we need to resolve, and we must investigate the interplay between visual, acoustic, and motion data and the driver’s attention in order to utilize these multimodal data properly.

The visual data from outside the car provides a spatiotemporal assessment of the environment, while the visual data from inside the car provides information on the occupants besides the driver and on behavior of theirs that may affect the driver’s state. The same holds for the acoustic information from both the outside and inside environments. The question for us is whether these two modalities enhance or diminish the state of the driver (whether positively or negatively). Similar considerations apply to the observed motion of the driver (and the seat). These two motions, detected by a pressure sensor on the seat, need to be decoupled in order to discriminate between motion arising from the restlessness of the driver and motion caused by the roughness of the road.

In our presentation, we shall try to disentangle these effects.


Ruzena Bajcsy received Master’s and Ph.D. degrees in electrical engineering from Slovak Technical University, Bratislava, Slovak Republic, in 1957 and 1967, respectively, and a Ph.D. in computer science from Stanford University, Stanford, CA, in 1972. She is a Professor of Electrical Engineering and Computer Sciences and NEC Chair holder at the University of California, Berkeley, and Director Emeritus of the Center for Information Technology Research in the Interest of Society (CITRIS). Prior to joining Berkeley, she was a professor in the Computer and Information Science Department at the University of Pennsylvania, Philadelphia, where in 1979 she founded the GRASP (General Robotics and Active Perception) Laboratory, which is still flourishing today. In 1999 she was appointed to head the Computer and Information Science and Engineering Directorate at the National Science Foundation. In 2001, after finishing her term at NSF, she retired from the University of Pennsylvania and joined the faculty at the University of California, Berkeley. Dr. Bajcsy is a member of the National Academy of Engineering and of the National Academy of Sciences’ Institute of Medicine, as well as a Fellow of the Association for Computing Machinery (ACM), a Fellow of the IEEE, and a Fellow of the American Association for Artificial Intelligence. In 2001, she received the ACM/AAAI Allen Newell Award, and since 2008 she has been a member of the American Academy of Arts and Sciences. She is the recipient of the Benjamin Franklin Medal for Computer and Cognitive Sciences (2009) and the IEEE Robotics and Automation Award (2013) for her contributions to the field of robotics and automation.

She also received the 2016 NAE Simon Ramo Founders Award for her life achievements.

Patrick Griffis
Technology Vice President of Dolby Laboratories
Talk Time: 8:30 AM - 9:30 AM March 30 (Saturday)
Title: HDR: From Dream to Mainstream


In this presentation, Patrick Griffis, SMPTE President and Chair of the SMPTE drafting group that produced a fundamentally new Electro-Optical Transfer Function (EOTF) based on the human visual model, popularly nicknamed “PQ,” will review some of the theory and resulting practice behind the successful mainstream deployment of HDR in television, mobile devices, and even computers. The presentation will include animations defining concepts such as “Perceptual Quantizer,” “Color Volume,” and “Content Mapping,” which are key to understanding HDR. He will also cover the market momentum behind HDR from the recent Consumer Electronics Show.


As Technology Vice President in the CTO Office at Dolby, Patrick Griffis is charged with helping define the company’s future technology strategy, which includes identifying and tracking key technical trends, performing technical due diligence, and supporting advanced technology initiatives for the company. He has been an active company spokesperson on next-generation imaging and, in particular, “better pixels,” a term he coined for High Dynamic Range plus Wide Color Gamut, and he chaired the SMPTE ST 2084 drafting group which standardized the PQ curve.

Before joining Dolby, Pat spent 10 years at Microsoft leading global digital media standards strategy, including standardization of Windows Media video technology as an international SMPTE standard.

Prior to Microsoft, Pat spent 15 years at Panasonic in senior management positions, including Vice President of Strategic Product Development at Panasonic Broadcast where he drove Panasonic’s HDTV strategy for the US.

Pat started his career at the RCA Consumer Electronics Division, earning eight patents in TV product design. A SMPTE Fellow, Pat currently serves as SMPTE President. He also serves as Vice President on the Board of the Ultra-HD Forum. Pat is Dolby’s Board representative in the UHD Alliance, as well as Vice Chair of its Interoperability Working Group. Pat served two terms as President of the IEEE Consumer Electronics Society and is a member of the IBC Council, an industry executive advisory group to the IBC Board. He is also a member of the Academy of Digital TV Pioneers. Pat holds a BSEE degree from Tufts University and an MSEE from Purdue University.

© 2019. IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR), All Rights Reserved.