This talk will present the speaker's efforts over the last decade, ranging from 1) reasoning beyond appearance for visual question answering and image/video captioning tasks and their evaluation, through 2) counterfactual reasoning about implicit physical properties, to 3) the roles these capabilities play in automated mobility. The talk will also feature previous and ongoing projects of the Active Perception Group (APG) at the ASU School of Computing and Augmented Intelligence (SCAI) (NSF RI, NRI, CPS, and SaTC; DARPA KAIROS and GAILA-E; and Arizona IAM) that address emerging national challenges in the automated mobility and intelligent transportation domains.
Yezhou (YZ) Yang is an Associate Professor and a Fulton Entrepreneurial Professor in the School of Computing and Augmented Intelligence (SCAI) at Arizona State University. He founded and directs the ASU Active Perception Group, and serves as the topic lead for situation awareness at the Institute of Automated Mobility, Arizona Commerce Authority. He also serves as an area lead at Advanced Communications Technologies (ACT, a Science and Technology Center under Arizona's New Economy Initiative). His work explores visual primitives and representation learning for visual (and language) understanding, grounding these primitives in natural language, and high-level reasoning over them for intelligent systems, secure and robust AI, and fair vision-and-language (V&L) model evaluation. Yang is a recipient of the Qualcomm Innovation Fellowship (2011), the NSF CAREER Award (2018), and the Amazon AWS Machine Learning Research Award (2019). He received his Ph.D. from the University of Maryland, College Park, and his B.E. from Zhejiang University, China. He is a co-founder of ARGOS Vision.