Our research experts

Liu Ren, Ph.D.

Bosch HMI – Evolving from Human Machine Interaction to Human-Machine Intelligence

“With integrated human-machine intelligence, we enable intelligent and trustworthy AIoT products and services with inspiring user experience to improve quality of life.”
Liu Ren, Ph.D.

I am the Vice President and Chief Scientist of Integrated Human-Machine Intelligence (HMI) at Bosch Research in North America. I am responsible for shaping strategic directions and developing cutting-edge AI technologies, focusing on big data visual analytics, explainable AI, mixed reality/AR, computer perception, NLP, conversational AI, audio analytics, and wearable analytics for AIoT application areas such as autonomous driving, car infotainment, advanced driver assistance systems (ADAS), Industry 4.0, smart home/building solutions, and robotics. As the responsible global head, I oversee these research activities for teams in Silicon Valley (U.S.), Pittsburgh (U.S.), and Renningen (Germany). I have won the Bosch North America Inventor of the Year Award for 3D maps (2016), as well as Best Paper Awards (2018, 2020) and an Honorable Mention Award (2016) for big data visual analytics at IEEE Visualization.

Curriculum vitae

  1. CS Ph.D. graduate, vision-based performance interface, machine learning for human motion capture, analysis, and synthesis, Carnegie Mellon University (USA)
  2. CS Ph.D. intern, computer graphics, real-time rendering, and scientific visualization, Mitsubishi Electric Research Laboratories (USA)
  3. CS M.S. graduate, AI-assisted computer-aided design, Zhejiang University (China)

Selected publications


    L. Gou et al. (2020)

    VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection
    • L. Gou, L. Zou, N. Li, M. Hofmann, A. K. Shekar, A. Windt, L. Ren
    • IEEE Visualization (VAST) (2020)
    • IEEE Transactions on Visualization and Computer Graphics (2021)
    • Best Paper Award

    L. Ren (2020)

    More Transparent AI with Visual Analytics
    • Bosch Digital Annual Book 2019

    L. Ren (2019)

    Opening the Black Box of Automotive AI – A Visual Analytics Approach
    • Auto.AI USA 2019
    • Opening Keynote

    Y. Ming et al. (2019)

    ProtoSteer: Steering Deep Sequence Model with Prototypes
    • Y. Ming, P. Xu, F. Cheng, H. Qu, L. Ren
    • IEEE Visualization (VAST) (2019)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 26, issue 1, Jan. 2020

    Y. Yang et al. (2019)

    Analytic Combined IMU Integration (ACI^2) For Visual Inertial Navigation
    • Y. Yang, B. P. Wisely Babu, C. Chen, G. Huang, L. Ren
    • IEEE International Conference on Robotics and Automation (ICRA) (2020)

    Y. Ming et al. (2019)

    Interpretable and Steerable Sequence Learning via Prototypes
    • Y. Ming, P. Xu, H. Qu, L. Ren
    • ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) (2019)

    D. Liu et al. (2018)

    TPFlow: Progressive partition and multidimensional pattern extraction for large-scale spatio-temporal data analysis
    • D. Liu, P. Xu, L. Ren
    • IEEE Visualization (2018)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 25, issue 1, Jan. 2019
    • Best Paper Award

    B. P. W. Babu et al. (2018)

    On exploiting per-pixel motion conflicts to extract secondary motions
    • B. P. W. Babu, Z. Yan, M. Ye, L. Ren
    • IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2018)

    A. Bilal et al. (2018)

    Do convolutional neural networks learn class hierarchy?
    • A. Bilal, A. Jourabloo, M. Ye, X. Liu, L. Ren
    • IEEE Visualization (2017)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 24, issue 1, pp. 152-162

    Y. Chen et al. (2018)

    Sequence synopsis: Optimize visual summary of temporal event data
    • Y. Chen, P. Xu, L. Ren
    • IEEE Visualization (2017)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 24, issue 1, pp. 45-55

    Z. Yan et al. (2017)

    Dense visual SLAM with probabilistic surfel map
    • Z. Yan, M. Ye, L. Ren
    • IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2017)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 23, issue 11

    A. Bilal & L. Ren (2017)

    Powerset: A comprehensive visualization of set intersections
    • IEEE Visualization (2016)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 23, issue 1, pp. 361-370

    P. Xu et al. (2017)

    ViDX: Visual diagnostics of assembly line performance in smart factories
    • P. Xu, H. Mei, L. Ren, W. Chen
    • IEEE Visualization (2016)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 23, issue 1, pp. 291-300
    • Best Paper Honorable Mention Award

    A. Jourabloo et al. (2017)

    Pose-invariant face alignment with a single CNN
    • A. Jourabloo, M. Ye, X. Liu, L. Ren
    • IEEE International Conference on Computer Vision (ICCV) (2017), pp. 3219-3228

    C. Du et al. (2016)

    Edge snapping-based depth enhancement for dynamic occlusion handling in augmented reality
    • C. Du, Y. Chen, M. Ye, L. Ren
    • IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2016), pp. 54-62

    M. Ye et al. (2011)

    Accurate 3D pose estimation from a single depth image
    • M. Ye, X. Wang, R. Yang, L. Ren, M. Pollefeys
    • IEEE International Conference on Computer Vision (ICCV) (2011), pp. 731-738

    X. Huang et al. (2009)

    Image deblurring for less intrusive iris capture
    • X. Huang, L. Ren, R. Yang
    • IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2009), pp. 1558-1565

    L. Ren et al. (2005)

    A Data-Driven Approach to Quantifying Natural Human Motion
    • L. Ren, A. Patrick, A. Efros, J. Hodgins, J. Rehg
    • ACM Transactions on Graphics (SIGGRAPH 2005), vol. 24, issue 3, pp. 1090-1097

    L. Ren et al. (2005)

    Learning Silhouette Features for Control of Human Motion
    • L. Ren, G. Shakhnarovich, J. Hodgins, H. Pfister, P. Viola
    • ACM Transactions on Graphics (SIGGRAPH 2004 Recommendation), vol. 24, issue 4, pp. 1303-1331

    L. Ren et al. (2002)

    Object Space EWA Surface Splatting: A Hardware Accelerated Approach to High Quality Point Rendering
    • L. Ren, H. Pfister, M. Zwicker
    • Computer Graphics Forum, vol. 21, issue 3, pp. 461-470
    • EUROGRAPHICS 2002, Best Paper Nominee

Interview with Liu Ren, Ph.D.

VP and Chief Scientist of Integrated Human-Machine Intelligence

Liu Ren, Ph.D.

Please tell us what fascinates you most about research.
In the era of AI, our research in this domain is about combining machine intelligence with human intelligence to deliver impactful AIoT products and services with superior user experiences. AI research on topics such as mixed reality/AR, conversational AI, and smart wearables can have a great impact on our everyday lives through consumer products. In addition, tackling AI challenges in big data visual analytics, explainable AI, NLP, and audio analytics can help automate many labor-intensive tasks for our workers and developers. The potential is very exciting to me. Our cutting-edge research outcomes can not only be presented at leading AI conferences, but, more importantly, are tangible. They are unique selling points (USPs) that enable our fascinating products to succeed in our business areas, including autonomous driving, advanced driver assistance, smart home/buildings, car infotainment, smart manufacturing, and robotics.

What makes research done at Bosch so special?
First of all, Bosch has a global setup. Working in our research unit in Silicon Valley, the world's hub of AI and software innovation, gives our researchers the opportunity to engage the Silicon Valley ecosystem to identify and shape early trends, to work with professors from top universities such as Stanford and UC Berkeley to advance core research, and to drive innovation that addresses emerging and undiscovered business needs, impacting the world and leading Bosch into the future. Apart from its global setup, Bosch also has a diversified product portfolio, which allows our researchers to drive innovation in building sustainable AI solutions that are customer-centric and market-driven, leading to real-world impact that goes beyond striving for excellent scientific impact. But this does not mean we focus only on the short term: Bosch is in a unique position to commit to long-term research that fits our company strategy, because as a private company we are much less influenced by fluctuations in the stock market.

What research topics are you currently working on at Bosch?
Research is about breadth and depth. While the research area of integrated human-machine intelligence (HMI) has a broad scope, the different research applications in this field share something in common: most of them need to deal with domain-specific AI technologies and user experience requirements. As the responsible global head, I work closely with my global teams to develop research strategies and roadmaps for different AIoT topic areas. In other words, I decide what we will do or will not do – a challenging task that requires a deep understanding of technology trends and limitations, the market situation, business needs, and resource constraints. As Chief Scientist, I focus my own research on visual computing, a domain-specific AI topic area closely related to computer vision, computer graphics, visualization, and machine learning – which also happens to be my favorite research topic. My recent research focus includes big data visual analytics, explainable AI, mixed reality/AR, 3D perception, and smart wearables, which are a core part of key products and services in autonomous driving V&V, cloud-based retail analytics, smart car repair assistance, car infotainment, smart measurement tools, and Industry 4.0. In particular, together with my team and partners, I have helped shape visual analytics and explainable AI research toward AIoT directions in the research community, and recently won three best paper or honorable mention awards for Bosch Research at top computer science conferences in this domain.

What are the biggest scientific challenges in your field of research?
I see three major challenges in our research for AIoT, all related to the scale-up needs of typical AIoT products and services. Firstly, truly understanding users is the key to enabling a superior user experience for AIoT products (e.g., smart speakers) in large-scale deployment. This is a long-standing research problem, as it is still very challenging to accurately understand a user's intention, behavior, and emotion from different input modalities such as speech, audio, gesture, and visuals. Secondly, figuring out how to enable an intuitive UX for AIoT products and services in the wild remains a big challenge. Most existing solutions work relatively well in controlled environments (e.g., AR systems in quiet indoor settings) but lack robustness or scalability in uncontrolled environments (e.g., noisy outdoor settings), which limits their usability for wider adoption. Finally, in addition to superior UX, trustworthiness can be another factor that hinders the wide adoption of an AIoT product or service, as most AI systems run like a black box. Leveraging human intelligence to improve the interpretability and robustness of an AI system can help here. For example, visual analytics that combines representation learning (e.g., XAI), data visualization, and minimal user interaction is considered a promising approach to this problem.

How do the results of your research become part of solutions "Invented for life"?
For the results of our research to become part of "Invented for life" solutions, they must have real-world impact. One earlier highlight is 3D artMap. The world's first artistic 3D map for navigation, 3D artMap uses artistic styling to emphasize important map features for easy orientation and a personalized navigation experience. It has been adopted as part of the automotive industry standard and is currently used in several in-car navigation products. Another example is the Bosch Intelligent Glove (BIG), a recent highlight in the Industry 4.0 domain. BIG is a smart sensor glove that can improve production quality and efficiency, and thereby reduce manufacturing costs, based on our unique fine finger-motion recognition and analysis algorithms. In addition to its successful SOP in China, BIG recently won "The World's Top 10 Industry 4.0 Innovation Award" from the Chinese Association of Science and Technology, where it was honored alongside Industry 4.0 innovations from major global organizations such as Siemens and GE. Finally, much of our recent research on visual analytics and explainable AI has not only impacted the scientific community (e.g., three recent best paper or honorable mention awards at the top computer science conference for this field), but has also become operational in our core AIoT applications focusing on mobility, consumer goods, and smart manufacturing.

Get in touch with me

Liu Ren, Ph.D.
VP and Chief Scientist of Integrated Human-Machine Intelligence
