Our research experts

Liu Ren, Ph.D.

Bosch human machine interaction (HMI) – intuitive, interactive, intelligent, and “Invented for life”

“Empowered by domain-specific artificial intelligence, the intuitive, interactive and intelligent HMI solutions we develop enable inspiring user experiences for Bosch products and services to improve quality of life.”

Liu Ren, Ph.D.

I am the Global Head and Chief Scientist of HMI at Bosch Research in Silicon Valley, responsible for shaping the strategic direction and developing cutting-edge HMI technologies such as AR, visual analytics, NLP, conversational AI, and human factors. I also oversee the HMI research activities of teams in the USA, Germany, and China, and have won the Bosch North America Inventor of the Year Award for 3D maps (2016) as well as a Best Paper Award (2018) and an Honorable Mention Award (2016) for big data visual analytics at IEEE Visualization.

Curriculum vitae

Carnegie Mellon University (USA)

CS Ph.D. graduate, vision-based performance interface, machine learning for human motion capture, analysis, and synthesis

Mitsubishi Electric Research Laboratories (USA)

CS Ph.D. intern, computer graphics, real-time rendering, and scientific visualization

Zhejiang University (China)

CS M.S. graduate, AI-assisted computer-aided design

Selected publications

  • Publications

    D. Liu et al. (2018)

    TPFlow: Progressive partition and multidimensional pattern extraction for large-scale spatio-temporal data analysis
    • D. Liu, P. Xu, L. Ren
    • IEEE Visualization (2018)
    • IEEE Transactions on Visualization & Computer Graphics
    • Best Paper Award

    B. P. W. Babu et al. (2018)

    On exploiting per-pixel motion conflicts to extract secondary motions
    • B. P. W. Babu, Z. Yan, M. Ye, L. Ren
    • IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2018)

    A. Bilal et al. (2018)

    Do convolutional neural networks learn class hierarchy?
    • A. Bilal, A. Jourabloo, M. Ye, X. Liu, L. Ren
    • IEEE Visualization (2017)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 24, issue 1, pp. 152-162

    Y. Chen et al. (2018)

    Sequence synopsis: Optimize visual summary of temporal event data
    • Y. Chen, P. Xu, L. Ren
    • IEEE Visualization (2017)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 24, issue 1, pp. 45-55

    Z. Yan et al. (2017)

    Dense visual SLAM with probabilistic surfel map
    • Z. Yan, M. Ye, L. Ren
    • IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2017)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 23, issue 11

    A. Bilal & L. Ren (2017)

    Powerset: A comprehensive visualization of set intersections
    • IEEE Visualization (2016)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 23, issue 1, pp. 361-370

    P. Xu et al. (2017)

    ViDX: Visual diagnostics of assembly line performance in smart factories
    • P. Xu, H. Mei, L. Ren, W. Chen
    • IEEE Visualization (2016)
    • IEEE Transactions on Visualization and Computer Graphics, vol. 23, issue 1, pp. 291-300
    • Best Paper Honorable Mention Award

    A. Jourabloo et al. (2017)

    Pose-invariant face alignment with a single CNN
    • A. Jourabloo, M. Ye, X. Liu, L. Ren
    • IEEE International Conference on Computer Vision (ICCV) (2017), pp. 3219-3228

    C. Du et al. (2016)

    Edge snapping-based depth enhancement for dynamic occlusion handling in augmented reality
    • C. Du, Y. Chen, M. Ye, L. Ren
    • IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2016), pp. 54-62

    M. Ye et al. (2011)

    Accurate 3D pose estimation from a single depth image
    • M. Ye, X. Wang, R. Yang, L. Ren, M. Pollefeys
    • IEEE International Conference on Computer Vision (ICCV) (2011), pp. 731-738

    X. Huang et al. (2009)

    Image deblurring for less intrusive iris capture
    • X. Huang, L. Ren, R. Yang
    • IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2009), pp. 1558-1565

    L. Ren et al. (2005)

    A Data-Driven Approach to Quantifying Natural Human Motion
    • L. Ren, A. Patrick, A. Efros, J. Hodgins, J. Rehg
    • ACM Transactions on Graphics (SIGGRAPH 2005), vol. 24, issue 3, pp. 1090-1097

    L. Ren et al. (2005)

    Learning Silhouette Features for Control of Human Motion
    • L. Ren, G. Shakhnarovich, J. Hodgins, H. Pfister, P. Viola
    • ACM Transactions on Graphics (SIGGRAPH 2004 Recommendation), vol. 24, issue 4, pp. 1303-1331

    L. Ren et al. (2002)

    Object Space EWA Surface Splatting: A Hardware Accelerated Approach to High Quality Point Rendering
    • L. Ren, H. Pfister, M. Zwicker
    • Computer Graphics Forum, vol. 21, issue 3, pp. 461-470
    • EUROGRAPHICS 2002, Best Paper Nominee

Interview with Liu Ren, Ph.D.

Global Head and Chief Scientist for AI Empowered Human Machine Interaction Technologies and Systems

Please tell us what fascinates you most about research.
HMI is integral to the everyday human experience. This means that research on HMI-related topics like augmented reality, smart wearables, visual analytics, natural language processing, conversational AI, and human factors can have a great impact on our everyday lives, which is very exciting to me. The research outcome is always tangible, and I can feel the inspiring, sometimes emotional, user experience of my research results in fascinating application areas such as smart homes, car infotainment, autonomous driving, robotics, and even Industry 4.0. The requirements of HMI research are often characterized by three ‘I’s: intuitive, interactive, and intelligent. However, I think it also has a fourth ‘I’, international, because different countries, cultures, and markets sometimes have very different expectations of these HMI products. Isn’t that interesting?

What makes research done at Bosch so special?
First of all, Bosch has a global setup. Working in our research unit in Silicon Valley, the world’s hub of HMI innovation, gives our researchers the opportunity to engage the Silicon Valley ecosystem to identify and shape early trends, as well as to drive innovation in addressing emerging and undiscovered HMI needs, impacting the world and leading Bosch into the future. Apart from a global setup, Bosch also has a diversified product portfolio, which allows our researchers to drive innovation in building sustainable HMI solutions that are customer-centric and market-driven, leading to a real-world impact that goes beyond striving for excellent scientific impact. But this does not mean we focus on the short term; because Bosch is a private company, it is uniquely positioned to commit to long-term research that fits our company strategy. In contrast, investors or stakeholders in publicly traded companies can sometimes turn research from the most fascinating job into the most unstable one.

What research topics are you currently working on at Bosch?
Research is about breadth and depth. While HMI research has a broad scope on the one hand, its topic areas share something in common on the other. For example, most HMI topic areas need to deal with domain-specific AI technologies and user experience requirements. As Global Head of HMI research, I mainly work closely with my global teams to develop research strategies and roadmaps for the different HMI topic areas. In other words, I decide what we shall and shall not do, a challenging task that requires a deep understanding of technology trends and limitations, the market situation, business needs, and resource constraints. As Chief Scientist, I focus my own research on visual computing, an HMI topic area closely related to computer vision, computer graphics/visualization, and machine learning, which also happens to be my favorite HMI topic. My recent focus includes augmented reality, big data visual analytics, and smart wearables, with important applications in car repair assistance, autonomous driving, security systems, and Industry 4.0. In addition to technology transfers into products, some of my research outcomes are summarized in recent award-winning publications at the corresponding top venues.

What are the biggest scientific challenges in your field of research?
I see three major challenges in HMI research. First, truly understanding users is key to enabling a superior user experience for an intelligent HMI system (e.g., smart speakers). This is a long-standing research problem, as it is still very challenging to accurately understand a user’s intention, behavior, and emotion from typical HMI input modalities such as speech, audio, gesture, and visuals. Second, figuring out how to enable an intuitive user experience for HMI in the wild remains a big challenge. Most HMI solutions work relatively well in controlled environments (e.g., AR systems in quiet, indoor settings) but lack robustness or scalability in uncontrolled environments (e.g., noisy outdoor settings), which limits the usability of some existing HMI solutions and hinders their wider adoption. Finally, in the era of AI, improving human trust in black-box AI systems for practical use is itself an HMI challenge. In the research community, recent efforts to improve the interpretability of AI using visual analytics, which combines data visualization, user interaction, and data analytics, have received quite a bit of attention.

How do the results of your research become part of solutions "Invented for life"?
Creating real-world impact out of HMI research is always exciting. One such highlight is 3D artMap. As the world’s first artistic 3D map for navigation, 3D artMap renders important map features in artistic styles for easy orientation and a personalized navigation experience. It has been adopted as part of an automotive industry standard and is currently used in several in-car navigation products. Another example is the Bosch Intelligent Glove (BIG), a recent highlight in the Industry 4.0 domain. BIG is a smart sensor glove that, based on our unique fine finger motion recognition and analysis algorithms, can improve production quality and efficiency and thereby reduce manufacturing costs. BIG was pilot-launched in several Bosch plants in China and recently won “The World’s Top 10 Industry 4.0 Innovation Award” from the Chinese Association of Science and Technology, honored alongside Industry 4.0 innovations from major global players such as Siemens and GE. For commercial success, it is sometimes also crucial to identify a high-potential research topic at the right time. I started visual analytics research three years ago. Together with my colleagues, we have developed several cutting-edge solutions that not only received best paper awards from academia but were also requested by our business units for their product solutions, because improving the interpretability of AI and big data is becoming increasingly important.

Liu Ren, Ph.D.

Get in touch with me

Liu Ren, Ph.D.
Global Head and Chief Scientist for AI Empowered Human Machine Interaction Technologies and Systems
