Intelligent Sensing Integrated Circuits and Systems Laboratory, Department of Electronic Engineering, Tsinghua University
The Integrated Intelligent Sensing Laboratory (iVip Lab), located in the Rohm Building of the Department of Electronic Engineering at Tsinghua University, is devoted to energy-efficient integrated platforms for perceptual computing. In recent years the team has published a number of papers at international conferences and holds multiple invention patents in integrated-circuit design areas including approximate computing, approximate memory, analog computing, "Senputing" (sensing with computing), and near-sensor/in-sensor computing; these technologies chart a distinctive path for applying integrated intelligent sensing to robotics, wearables, the Internet of Things (IoT), and mobile terminals. The laboratory carries out algorithm design, hardware-architecture design, and chip design and verification for energy-efficient machine-perception integrated circuits and their system applications, and further explores how intelligent sensors (visual, auditory, tactile, etc.) can be deployed as systems in robotics, wearables, and edge intelligent sensing.
Direction I. Future Sensors: Energy-Efficient Perceptual Computing Architectures and Integrated Circuits for Future Smart Sensors
Integrated-circuit chips are the cornerstone of the information society. The iVip Lab challenges existing integrated sensing solutions by designing energy-efficient perceptual computing architectures and integrated circuits for future smart sensor systems, and proposes "Senputing" (Sensing with Computing), an energy-efficient architecture that integrates intelligent sensing and processing.
For the post-Moore era, our research on energy-efficient intelligent sensing and signal-processing architectures proposes the new "Senputing" (Sensing with Computing) perceptual computing paradigm. It deeply fuses multiple sensor modalities (vision, hearing, touch, etc.) with intelligent signal-processing tasks, adopting "physical computing" based on analog-to-information conversion together with approximate computing. Through joint algorithm and hardware co-design and optimization, the lab designs and implements ultra-low-power integrated circuits for multimodal intelligent sensing applications such as visual, auditory, and tactile perception, meeting the stringent demands for miniaturization, intelligence, and high energy efficiency in machine-perception scenarios such as robotics, wearables, and the IoT.
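As a toy illustration of the approximate-computing side of this paradigm, the sketch below models a multiply-accumulate (MAC) operation whose operands have their low-order bits truncated, a common way to trade arithmetic precision for energy. The bit widths, data, and function names are illustrative assumptions, not the lab's circuits.

```python
# Illustrative sketch (not the lab's design): modeling the accuracy/energy
# trade-off of approximate computing via operand bit truncation in a MAC loop.
import numpy as np

def truncate(x: np.ndarray, drop_bits: int) -> np.ndarray:
    """Zero out the lowest `drop_bits` bits of integer operands,
    mimicking a truncation-based approximate multiplier input."""
    return (x >> drop_bits) << drop_bits

def approx_mac(a: np.ndarray, b: np.ndarray, drop_bits: int) -> int:
    """Multiply-accumulate with truncated operands."""
    return int(np.sum(truncate(a, drop_bits).astype(np.int64) *
                      truncate(b, drop_bits).astype(np.int64)))

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=1024)   # e.g., 8-bit pixel values
b = rng.integers(0, 256, size=1024)   # e.g., 8-bit filter weights
exact = approx_mac(a, b, 0)           # drop_bits = 0 gives the exact result

for drop in (1, 2, 3, 4):
    approx = approx_mac(a, b, drop)
    err = abs(approx - exact) / exact
    print(f"drop {drop} LSBs: relative error = {err:.4%}")
```

Dropping more LSBs stands in for a cheaper multiplier; the printed relative error is the "computing quality" side of the trade-off.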
Direction II. iSensing Systems: Intelligent Sensing Systems for Robotics, Wearables, and Edge Computing
Building on its advanced intelligent-sensing IC technology, the iVip Lab conducts broad collaborative research to realize a variety of intelligent sensing system platforms, targeting mobile robot systems (indoor wheeled-robot localization and navigation, intelligent robotic-arm grasping, etc.), wearable systems (smart tactile gloves, audiovisual wearable interaction systems, etc.), and IoT edge-computing nodes.
Intelligent service robots require mobile robot systems with multimodal intelligent perception and processing capabilities that extract dynamic semantic information from the environment, so as to support long-term localization and navigation under changing scenes. The lab has built a complete research pipeline covering dataset collection and release, SLAM algorithm design, and hardware-acceleration platform design:
(1) Collaboratively collected and released localization and navigation datasets for diverse dynamic scenes (OpenLORIS: a lifelong SLAM dataset), capturing characteristics such as environment/object changes, topology changes, and knowledge forgetting;
(2) Designed SLAM algorithms and systems suited to long-term (lifelong) dynamic scenes, using techniques such as visual-semantic fusion (see the sketch after this list);
(3) Designed and implemented energy-efficient SLAM hardware architectures that meet the real-time, dynamically adaptive localization and navigation requirements of intelligent service robots.
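As a simplified illustration of the dynamic-scene idea in item (2), the hedged sketch below keeps only feature points that fall on static semantic classes before they would be passed to pose estimation. The ORB/OpenCV pipeline, class IDs, and function names are assumptions for illustration, not the lab's released SLAM system.

```python
# Hypothetical sketch: reject feature points on dynamic objects using a
# per-pixel semantic mask, a common ingredient of dynamic-scene SLAM.
import numpy as np
import cv2

DYNAMIC_CLASSES = {2, 15}  # e.g., "car", "person" label ids (assumed)

def static_keypoints(gray: np.ndarray, semantic_mask: np.ndarray):
    """Detect ORB keypoints and keep only those lying on static classes."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return [], None
    keep = [i for i, kp in enumerate(keypoints)
            if semantic_mask[int(kp.pt[1]), int(kp.pt[0])] not in DYNAMIC_CLASSES]
    return [keypoints[i] for i in keep], descriptors[keep]

# Usage with synthetic data (a real system would use camera frames plus a
# semantic segmentation network to produce the mask):
gray = (np.random.rand(480, 640) * 255).astype(np.uint8)
mask = np.zeros((480, 640), dtype=np.uint8)   # all-static scene
kps, desc = static_keypoints(gray, mask)
print(len(kps), "static keypoints retained")
```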
Active Projects
We are developing artificial perception systems, in particular visual perception devices for embedded application scenarios that enhance the environmental awareness of living beings and robots, as well as IoT devices for environmental monitoring.
iEyes (intelligent Eyes) is our FPGA-based vision-processing platform, on which system architectures and algorithms are evaluated, such as feature extraction with both conventional and machine-learning methods. The current target application of the iEyes platform is localization and navigation for various robots.
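As an example of the kind of feature-extraction kernel one might prototype in software before mapping it onto a platform like iEyes, here is a minimal NumPy reference model of the Harris corner response; the window size, constant k, and test image are assumed values for illustration only.

```python
# Illustrative only: a NumPy reference model of a Harris-corner kernel, the
# sort of vision workload typically prototyped before FPGA implementation.
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.04, win: int = 3) -> np.ndarray:
    """Compute the Harris corner response R = det(M) - k * trace(M)^2."""
    iy, ix = np.gradient(img.astype(np.float64))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box_sum(a):  # simple windowed sum standing in for a Gaussian window
        out = np.zeros_like(a)
        pad = win // 2
        ap = np.pad(a, pad)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    sxx, syy, sxy = box_sum(ixx), box_sum(iyy), box_sum(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0   # a bright square whose vertices form strong corners
r = harris_response(img)
print("strongest response at:", np.unravel_index(np.argmax(r), r.shape))
```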
For future image-signal-processing (ISP) and machine-learning systems with energy-efficiency requirements beyond what current FPGA/ASIC implementations can achieve, we are developing an Approximate Computing architecture that balances energy efficiency against computing quality. We also propose a Physical Computing architecture for such perception systems, which extracts useful information directly from the analog sensing signal and achieves 3 to 6 orders of magnitude improvement in energy efficiency.
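The following behavioral toy model (an assumption for illustration, not a circuit description) conveys the analog-to-information idea behind Physical Computing: a weighted summation of modeled photodiode currents followed by a comparator yields a one-bit feature at the sensor, so far fewer bits need to be digitized and read out than for a full frame.

```python
# Behavioral sketch of "analog-to-information" extraction: an in-sensor
# weighted sum (current/charge-domain MAC) plus a comparator produces a
# 1-bit feature instead of a fully digitized frame.
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.uniform(0.0, 1.0, size=(8, 8))   # normalized photocurrents
weights = np.zeros((8, 8))                    # an edge-detecting template
weights[:, :4], weights[:, 4:] = +1.0, -1.0

analog_sum = float(np.sum(pixels * weights))  # modeled analog-domain MAC
threshold = 0.0
feature_bit = int(analog_sum > threshold)     # comparator output

print(f"analog weighted sum = {analog_sum:+.3f} -> feature bit = {feature_bit}")
print("readout: 1 bit instead of", pixels.size * 8, "bits for an 8-bit frame")
```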
Novel devices are also adopted: we are trying to implement an intelligent contact lens with flexible devices. Perhaps someday the kind of system used by agent Tom Cruise in Mission: Impossible (IV/V) will become mission possible.
Past Projects
LCVCodec: Low-Complexity and Low-Power Video Coding with CS, DVC and Blind Signal Separation.
MCVP: High-Performance Video Processing with Multi-cores.
MemOpti: High-Performance and Low-Power Storage Methods of Massive Data for Integrated Digital Video Systems.
LPC 2: Circuit-Level Low-Power Design for On-Chip Clock Tree at 65nm/45nm Technology Nodes with PVT-Aware Features.
LPC 1: Circuit-Level Low-Power Techniques for SoC Design, including low-power conditional pre-charge DFFs, low-swing interconnects, etc.