Consolidated References
[1] Johansson, R. S., & Flanagan, J. R. (2009). Coding and use of tactile signals from the fingertips in object manipulation tasks. Nature Reviews Neuroscience, 10, 345-359. https://doi.org/10.1038/nrn2621
[2] Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (2000). Principles of Neural Science (4th ed.). McGraw-Hill.
[3] Dahiya, R. S., Metta, G., Valle, M., & Sandini, G. (2010). Tactile sensing: From humans to humanoids. IEEE Transactions on Robotics, 26(1), 1-20. https://doi.org/10.1109/TRO.2009.2033627
[4] Yuan, W., Dong, S., & Adelson, E. H. (2017). GelSight: High-resolution robot tactile sensors for estimating geometry and force. Sensors, 17(12), 2762. https://doi.org/10.3390/s17122762
[5] Lambeta, M., Chou, P.-W., Tian, S., Yang, B., Maloon, B., Most, V. R., ... & Calandra, R. (2020). DIGIT: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters, 5(3), 3838-3845. https://arxiv.org/abs/2005.14679
[6] Lambeta, M., Wu, T., Sengul, A., & Calandra, R. (2024). Digitizing touch with an artificial multimodal fingertip (Digit 360). arXiv preprint, arXiv:2411.02834.
[7] Shaw, K., Agarwal, A., & Pathak, D. (2023). LEAP Hand: Low-cost, efficient, and anthropomorphic hand for robot learning. Robotics: Science and Systems (RSS).
[8] Hogan, N. (1985). Impedance control: An approach to manipulation. Journal of Dynamic Systems, Measurement, and Control, 107(1), 1-24. https://doi.org/10.1115/1.3140702
[9] Billard, A., & Kragic, D. (2019). Trends and challenges in robot manipulation. Science, 364(6446), eaat8414.
[10] Yin, Z.-H., Huang, B., Qin, Y., Chen, Q., & Wang, X. (2023). Rotating without seeing: Towards in-hand dexterity through touch. Robotics: Science and Systems (RSS).
[11] Pitz, J., Röstel, L., Sievers, L., & Bäuml, B. (2024). Dextrous tactile in-hand manipulation using a modular reinforcement learning architecture. IEEE-RAS International Conference on Humanoid Robots.
[12] Suresh, S., Si, Z., Anderson, S., Kaess, M., & Mukadam, M. (2024). NeuralFeels: Neural fields for visuotactile perception. Science Robotics, 9(86).
[13] Yang, F., Feng, C., Chen, Z., Park, H., Wang, D., Dou, Y., ... & Wong, A. (2024). Binding touch to everything: Learning unified multimodal tactile representations. CVPR 2024.
[14] Lepora, N. F. (2025). Tactile Robotics: Past and Future. arXiv preprint, arXiv:2512.01106.
[15] Various. (2025). Recent advances in tactile sensing technologies for human-robot interaction: Current trends and future perspectives. Sensors International. https://doi.org/10.1016/j.sintl.2025.100345
[16] Zhao, Z., Li, W., Li, Y., Liu, T., Li, B., Wang, M., Du, K., Liu, H., Zhu, Y., Wang, Q., Althoefer, K., & Zhu, S.-C. (2025). Embedding high-resolution touch across robotic hands enables adaptive human-like grasping. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-01053-3
[17] Zhao, C., Yu, Y., Ye, Z., Tian, Z., Zhang, Y., & Zeng, L.-L. (2025). Universal slip detection of robotic hand with tactile sensing. Frontiers in Neurorobotics, 19. https://doi.org/10.3389/fnbot.2025.1478758
[18] Lin, P., Huang, Y., Li, W., Ma, J., Xiao, C., & Jiao, Z. (2025). PP-Tac: Paper picking using omnidirectional tactile feedback in dexterous robotic hands. Robotics: Science and Systems (RSS) 2025.
[19] Cho, W., et al. (2025). Multi-dimensional tactile sensing for force/noise decoupling. ACS Applied Materials & Interfaces.
[20] Hossain, M., et al. (2025). Multi-axis tactile sensing for object orientation detection. Sensors and Actuators A: Physical.
[21] Chen, C., Yu, Z., Choi, H., Cutkosky, M., & Bohg, J. (2025). DexForce: Extracting force-informed actions from kinesthetic demonstrations for dexterous manipulation. IEEE Robotics and Automation Letters. arXiv:2501.10356.
[22] Wang, S., She, Y., Romero, B., & Adelson, E. H. (2021). GelSight Wedge: Measuring high-resolution 3D contact geometry with a compact robot finger. IEEE ICRA 2021. https://doi.org/10.1109/ICRA48506.2021.9560783
[23] Agarwal, A., Mirzaee, M. A., Sun, X., & Yuan, W. (2025). A modularized design approach for GelSight family of vision-based tactile sensors. International Journal of Robotics Research. https://doi.org/10.1177/02783649251339680
[24] Bhirangi, R., Hellebrekers, T., Majidi, C., & Gupta, A. (2021). ReSkin: Versatile, replaceable, lasting tactile skins. arXiv preprint, arXiv:2111.00071.
[25] Bhirangi, R., Pattabiraman, V., Norcross, E., Shocher, A., & Pinto, L. (2024). AnySkin: Plug-and-play skin sensing for robotic touch. ICRA 2025. arXiv:2409.08276.
[26] Sundaram, S., Kellnhofer, P., Li, Y., Zhu, J.-Y., Torralba, A., & Matusik, W. (2019). Learning the signatures of the human grasp using a scalable tactile glove. Nature, 569, 698-702.
[27] Murphy, L., et al. (2025). Capacitive tactile sensing for teaching by demonstration. arXiv preprint.
[28] Fang, J., et al. (2025). Force measurement technology of vision-based tactile sensor. Advanced Intelligent Systems (Wiley). https://doi.org/10.1002/aisy.202400290
[29] Yu et al. (2025). Recent progress in tactile sensing and machine learning for texture perception in humanoid robots. Interdisciplinary Materials (Wiley). https://doi.org/10.1002/idm2.12233
[30] Various. (2025). Comprehensive review of tactile sensing technologies in space robotics. Chinese Journal of Aeronautics. https://doi.org/10.1016/j.cja.2025.01.031
[31] Various. (2025). Vision-based tactile sensor design using physically based rendering. Communications Engineering (Nature). https://doi.org/10.1038/s44172-025-00350-4
[32] Huang, B., Wang, Y., et al. (2024). 3D-ViTac: Learning fine-grained manipulation with visuo-tactile sensing. CoRL 2024. arXiv:2410.24091.
[33] Zhang, N., Ren, J., Dong, Y., Gu, G., & Zhu, X. (2025). Soft robotic hand with tactile palm-finger coordination. Nature Communications, 16, 2395. https://doi.org/10.1038/s41467-025-57741-6
[34] Gao, Y., Zhang, J., Zhang, H., et al. (2025). A neuromorphic robotic electronic skin with active pain and injury perception. PNAS. https://doi.org/10.1073/pnas.2520922122
[35] Various. (2025). Multisensory electronic skin with decoupled pressure-temperature sensing. PNAS.
[36] Various. (2025). Recent advances in spike-based neural coding for tactile perception. Microsystems & Nanoengineering (Nature). https://doi.org/10.1038/s41378-025-01074-3
[37] Various. (2026). Bioinspired spiking architecture for energy-constrained touch encoding. Nature Communications. https://doi.org/10.1038/s41467-026-68858-7
[38] Yin, Z.-H., et al. (2025). Multi-dimensional tactile sensing for efficient robot learning. arXiv preprint.
[39] Lach, L., et al. (2023). Tactile-based object orientation detection for blind manipulation. arXiv preprint.
[40] Yin, J., Qi, H., Wi, Y., Kundu, S., Lambeta, M., Yang, W., Wang, C., Wu, T., Malik, J., & Hellebrekers, T. (2025). OSMO: Open-source tactile glove for human-to-robot skill transfer. arXiv preprint. arXiv:2512.08920.
[41] Albini, A., Kaboli, M., Cannata, G., & Maiolino, P. (2025). Representing data in robotic tactile perception — A review. arXiv preprint (submitted to IEEE Trans. Robot.). arXiv:2510.10804.
[42] Choi, S. (Korea University). (2026). Commercial tactile sensor survey. https://www.notion.so/2026-03-26-Tactile-sensors-32f13b6f5ea48054a5e3f4cee07f00c6
[43] Choi, H., Kim, A., & Cutkosky, M. R. (2024). CoinFT: A compact and affordable capacitive six-axis force/torque sensor for robotic applications. IEEE Sensors Journal. https://coin-ft.github.io/
[44] Choi, H. (2026). Multimodal Data for Robot Manipulation: From Tactile Sensing to Scalable Human Demonstrations. SNU Data Science Seminar, Seoul National University.
[45] Kim, U.-H., Lee, D.-H., Kim, Y.-B., Seok, D.-Y., & Choi, H.-R. (2017). A novel six-axis force/torque sensor for robotic applications. IEEE/ASME Trans. Mechatronics, 22, 1381-1391.
[46] Palli, G., Moriello, L., Scarcia, U., & Melchiorri, C. (2014). Development of an optoelectronic 6-axis force/torque sensor. Sensors and Actuators A: Physical, 220, 333-346.
[47] Fernandez, A. J., Weng, H., Umbanhowar, P. B., & Lynch, K. M. (2021). ViSiFlex: A low-cost compliant tactile fingertip for force, torque, and contact sensing. IEEE RA-L, 6(2), 3009-3016.
[48] Winston, C., Choi, H., Jitosho, R., Zhakypov, Z., Palmer, J. E., Cutkosky, M. R., & Okamura, A. M. (2025). Fourigami: A 4-DoF force-controlled origami finger pad haptic device. IEEE Trans. Robotics.
[49] Yoshida, K. T., et al. (2024). Design and evaluation of a 3-DoF haptic device for directional shear cues on the forearm. IEEE Trans. Haptics, 17(3), 483-495.
[50] Sarac, M., et al. (2022). Perceived intensities of normal and shear skin stimuli using a wearable haptic bracelet. IEEE RA-L.
[51] Yuan, Y., Che, H., Qin, Y., Huang, B., Yin, Z.-H., Lee, K.-W., Wu, Y., Lim, S.-C., & Wang, X. (2024). Robot Synesthesia: In-hand manipulation with visuotactile sensing. ICRA 2024. arXiv:2312.01853.
[52] Higuera, C., Sharma, A., Bodduluri, C. K., Fan, T., Lancaster, P., Malik, J., Pathak, D., Lambeta, M., & Calandra, R. (2024). Sparsh: Self-supervised touch representations for vision-based tactile sensing. CoRL 2024.
[53] Liu, Q., Cui, Y., Sun, Z., Li, G., Chen, J., & Ye, Q. (2025). VTDexManip: A dataset and benchmark for visual-tactile pretraining and dexterous manipulation with reinforcement learning. ICLR 2025.
[54] Feng, R., Hu, J., Xia, W., Gao, T., Shen, A., Sun, Y., Fang, B., & Hu, D. (2025). AnyTouch: Learning unified static-dynamic representation across multiple visuo-tactile sensors. ICLR 2025. arXiv:2502.12191.
[55] Various. (2025). AnyTouch 2: General optical tactile representation learning for dynamic tactile perception. OpenReview.
[56] Various. (2025). Sensor-invariant tactile representation. OpenReview.
[57] Wu, C., et al. (2025). Canonical 3D tactile representation for visuo-tactile imitation learning. ICRA 2025.
[58] Wang, S., Lambeta, M., et al. (2022). Tacto: A fast, flexible, and open-source simulator for vision-based tactile sensors. IEEE Robotics and Automation Letters.
[59] Si, Z., Zhang, G., Ben, Q., Romero, B., Xian, Z., Liu, C., & Gan, C. (2024). DiffTactile: A physics-based differentiable tactile simulator for contact-rich robotic manipulation. ICLR 2024.
[60] Zhang, C., Xue, Z., Yin, S., Zhao, B., et al. (2025). UniTacHand: Unified spatio-tactile representation for human to robotic hand skill transfer. arXiv preprint. arXiv:2512.21233.
[61] Open X-Embodiment Collaboration. (2024). Open X-Embodiment: Robotic learning datasets and RT-X models. ICRA 2024. arXiv:2310.08864.
[62] Various. (2024). Tactile sensing for dexterous manipulation: Taxonomies, datasets, and sim-to-real transfer. Journal of Multidisciplinary Engineering Science.
[63] Yang, F., et al. (2023). Touch and Go: Learning from human-collected vision and touch. ICCV 2023.
[64] Yang, F., et al. (2024). Touch100k: A large-scale touch-language-vision dataset. arXiv preprint. arXiv:2406.03813.
[65] Gao, R., et al. (2022). ObjectFolder 2.0: A multisensory object dataset for sim2real transfer. CVPR 2022.
[66] Various. (2025). EgoDex: Large-scale egocentric hand manipulation dataset via Apple Vision Pro. arXiv preprint.
[67] Choi, H., Hou, Y., Song, S., & Cutkosky, M. R. (2025). UMI-FT: Learning compliant manipulation at scale with force/torque sensing. arXiv preprint.
[68] Choi, H. (2026). Multimodal Data for Robot Manipulation. SNU Data Science Seminar.
[69] Bicchi, A. (2000). Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity. IEEE Transactions on Robotics and Automation, 16(6), 652-662.
[70] Bicchi, A., & Kumar, V. (2000). Robotic grasping and contact: A review. IEEE ICRA 2000.
[71] Zhao, Z., Li, W., Li, Y., et al. (2025). Embedding high-resolution touch across robotic hands enables adaptive human-like grasping. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-01053-3
[72] Richardson, B. A., Grüninger, F., Mack, L., Stueckler, J., & Kuchenbecker, K. J. (2025). ISyHand: A dexterous multi-finger robot hand with an articulated palm. IEEE-RAS Humanoids 2025. arXiv:2509.26236.
[73] Christoph, C. C., Eberlein, M., Katsimalis, F., Roberti, A., Sympetheros, A., Vogt, M. R., Liconti, D., Yang, C., Cangan, B. G., Hinchet, R. J., & Katzschmann, R. K. (2025). ORCA: An open-source, reliable, cost-effective, anthropomorphic robotic hand. arXiv preprint. arXiv:2504.04259.
[74] Kim, U., Jung, D., Jeong, H., Park, J., Jung, H.-M., Cheong, J., Choi, H. R., Do, H., & Park, C. (2021). Integrated linkage-driven dexterous anthropomorphic robotic hand. Nature Communications, 12, 7177. https://doi.org/10.1038/s41467-021-27261-0
[75] Kadalagere Sampath et al. (2023). Review on human-like robot manipulation using dexterous hands. Cognitive Computation and Systems (IET/Wiley).
[76] Various. (2025). Human-like dexterous manipulation for anthropomorphic five-fingered hands: A review. Engineering Science and Technology, an International Journal. https://doi.org/10.1016/j.jestch.2025.101938
[77] Various. (2025). Soft robotic dexterous hands: Advances and challenges. International Journal of Advanced Manufacturing and Mechatronics.
[78] Handa, A., et al. (2023). DeXtreme: Transfer of agile in-hand manipulation from simulation to reality. ICRA 2023.
[79] Wei, Z., Xu, Z., Guo, J., Hou, Y., Gao, C., Cai, Z., Luo, J., & Shao, L. (2025). D(R,O) Grasp: A unified representation of robot and object interaction for cross-embodiment dexterous grasping. ICRA 2025 (Best Paper).
[80] Lin, L., Patel, S., Moon, J., Lazebnik, S., & Jain, U. (2026). CRAFT: A tendon-driven hand with hybrid hard-soft compliance. arXiv preprint. arXiv:2603.12120.
[81] Zhang, Z., Han, T., Pan, J., & Wang, Z. (2025). CATCH-919 Hand: A 9-actuator 19-DOF anthropomorphic robotic hand. arXiv preprint.
[82] Yu, M., Jiang, Y., Chen, C., Jia, Y., & Li, X. (2025). RGMC Champion: Kinematic trajectory optimization for in-hand manipulation. IEEE Robotics and Automation Letters.
[83] Li, H., Ford, C. J., Bianchi, M., Catalano, M. G., Psomopoulou, E., & Lepora, N. F. (2022). BRL/Pisa/IIT SoftHand: A low-cost, 3D-printed, underactuated, tendon-driven hand with soft and adaptive synergies. IEEE Robotics and Automation Letters, 7(4), 8745-8751.
[84] Catalano, M. G., Grioli, G., Farnioli, E., Serio, A., Piazza, C., & Bicchi, A. (2014). Adaptive synergies for the design and control of the Pisa/IIT SoftHand. International Journal of Robotics Research, 33(5), 768-782. https://doi.org/10.1177/0278364913518998
[85] Della Santina, C., Piazza, C., Grioli, G., Catalano, M. G., & Bicchi, A. (2018). Toward dexterous manipulation with augmented adaptive synergies: The Pisa/IIT SoftHand 2. IEEE Transactions on Robotics, 34(5), 1141-1156.
[86] Capsi-Morales, P., Grioli, G., Piazza, C., Bicchi, A., & Catalano, M. G. (2020). Exploring the role of palm concavity and adaptability in soft synergistic robotic hands. IEEE Robotics and Automation Letters, 5(3), 4703-4710.
[87] Kakogawa, A., Nishimura, H., & Ma, S. (2016). Underactuated modular finger with pull-in mechanism for a robotic gripper. IEEE International Conference on Robotics and Biomimetics (ROBIO), 556-561.
[88] Odhner, L. U., Jentoft, L. P., Claffee, M. R., Corson, N., Tenzer, Y., Ma, R. R., Buehler, M., Kohout, R., Howe, R. D., & Dollar, A. M. (2014). A compliant, underactuated hand for robust manipulation. International Journal of Robotics Research, 33(5), 736-752.
[89] Fu, J., Yu, Z., Guo, Q., Zheng, L., & Gan, D. (2023). A variable stiffness robotic gripper based on parallel beam with vision-based force sensing for flexible grasping. Robotica. https://doi.org/10.1017/S026357472300156X
[90] Al Abeach, L. A. T., Nefti-Meziani, S., & Davis, S. (2017). Design of a variable stiffness soft dexterous gripper. Soft Robotics, 4(3), 274-284.
[91] Wang, W., & Ahn, S.-H. (2017). Shape memory alloy-based soft gripper with variable stiffness for compliant and effective grasping. Soft Robotics, 4(4), 379-389.
[92] Wei, Y., Chen, Y., Ren, T., Chen, Q., Yan, C., Yang, Y., & Li, Y. (2016). A novel, variable stiffness robotic gripper based on integrated soft actuating and particle jamming. Soft Robotics, 3(3), 134-143.
[93] Kim, Y., Shin, J., Won, J., Lee, W., & Seo, T. (2024). LBH gripper: Linkage-belt based hybrid adaptive gripper design for dish collecting robots. Robotics and Autonomous Systems, 185, 104886.
[94] Wang, H., Gao, B., Zhao, D., & Shen, H. (2025). A reconfigurable gripper inspired by elastic belt for versatile in-hand manipulations. IEEE Robotics and Automation Letters.
[95] Lin, J., et al. (2023). A bioinspired bidirectional stiffening soft actuator for multimodal, compliant, and robust grasping. Soft Robotics. https://doi.org/10.1089/soro.2022.0212
[96] Kopicki, M., Ansary, S. I., Tolomei, S., Angelini, F., Garabini, M., & Skrzypczyński, P. (2025). Underactuated dexterous robotic grasping with reconfigurable passive joints. IEEE Robotics and Automation Letters. arXiv:2501.16006.
[97] Johannsmeier, L., et al. (2025). A process-centric manipulation taxonomy for the organization, classification and synthesis of tactile robot skills. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-01045-3
[98] Hogan, N. (1985). Impedance control: An approach to manipulation. JDSMC, 107(1), 1-24.
[99] Romero, J., Tzionas, D., & Black, M. J. (2017). Embodied hands: Modeling and capturing hands and bodies together. SIGGRAPH Asia 2017.
[100] Various. (2024). Stretchable liquid-metal sensor glove. Nature Communications. https://doi.org/10.1038/s41467-024-50101-w
[101] Ruppel, P., et al. (2024). Reduced tactile sensor array for grasp analysis. Sensors.
[102] Xu, M., Zhang, H., Hou, Y., Xu, Z., Fan, L., Veloso, M., & Song, S. (2025). DexUMI: Using human hand as the universal manipulation interface for dexterous manipulation. arXiv preprint.
[103] Si, Z., et al. (2025). ExoStart: From 10 exoskeleton demos to dexterous robot manipulation.
[104] Fang, H.-S., Romero, B., Xie, Y., et al. (2025). DEXOP: A device for robotic transfer of dexterous human manipulation. arXiv preprint. arXiv:2509.04441.
[105] Qin, Y., et al. (2023). AnyTeleop: A general vision-based dexterous robot hand-arm teleoperation system. RSS 2023.
[106] Handa, A., Van Wyk, K., et al. (2020). DexPilot: Vision-based teleoperation for dexterous manipulation. ICRA 2020.
[107] Ding, Z., et al. (2024). Bunny-VisionPro: Real-time bimanual dexterous teleoperation for imitation learning. arXiv preprint. arXiv:2407.03162.
[108] Wang, C., et al. (2024). DexCap: Scalable and portable mocap data collection system. RSS 2024.
[109] Shaw, K., Bahl, S., & Pathak, D. (2024). Learning dexterity from human hand motion in internet videos. arXiv preprint. arXiv:2212.04498.
[110] Liu, Y., et al. (2025). ImMimic: Large-scale human trajectory + few-shot teleoperation interpolation.
[111] Li, Y., et al. (2024). DexH2R: Task-oriented dexterous manipulation from human to robots. arXiv preprint.
[112] Various. (2025). DOGlove: Low-cost open-source haptic feedback glove.
[113] Various. (2026). Feel Robot Feels: Tactile feedback array glove.
[114] Tang, M., et al. (2025). FSR sensor optimization for grasp classification. IEEE Journal of Biomedical and Health Informatics.
[115] Chen, H., et al. (2025). Capacitive sensor for lift-risk identification. Applied Ergonomics.
[116] Various. (2024). ML-based wearable sensors for real-time hand motion recognition. PMC. (Seoul National University)
[117] Various. (2025). TacCap: FBG optical tactile sensor thimble for human and robot fingertips. arXiv preprint, Mar 2025.
[118] Liu, Q., et al. (2025). VTDexManip: A large-scale visual-tactile dataset for dexterous manipulation from human demonstrations. ICLR 2025.
[119] Fang, J., et al. (2024). AirExo: Low-cost exoskeletons for learning whole-arm manipulation in the wild. ICRA 2024.
[120] Fang, J., et al. (2025). AirExo-2: Scaling up generalizable manipulation skills via purely kinesthetic demonstrations in the wild. CoRL 2025.
[121] Zhao, Q., et al. (2024). ACE: A cross-platform visual-exoskeleton system for low-cost dexterous teleoperation. CoRL 2024.
[122] Various. (2025). NuExo: A 5.2 kg active upper-limb exoskeleton for dexterous teleoperation with 100% ROM. ICRA 2025.
[123] Various. (2025). HumanoidExo: Lightweight exoskeleton with LiDAR for full-body humanoid teleoperation and locomotion learning. arXiv preprint, Oct 2025.
[124] Apple ML Research. (2025). EgoDex: Learning dexterous manipulation from large-scale egocentric hand data via Apple Vision Pro.
[125] Chi, C., Feng, S., Du, Y., Xu, Z., Cousineau, E., Burchfiel, B., & Song, S. (2023). Diffusion Policy: Visuomotor policy learning via action diffusion. RSS 2023 / IJRR 2024. arXiv:2303.04137.
[126] Zhao, T. Z., Kumar, V., Levine, S., & Finn, C. (2023). Learning fine-grained bimanual manipulation with low-cost hardware (ACT/ALOHA). RSS 2023. arXiv:2304.13705.
[127] OpenAI. (2019). Solving Rubik's Cube with a robot hand (Dactyl). arXiv preprint, arXiv:1910.07113.
[128] Yin, Z.-H., et al. (2023). Rotating without seeing: Towards in-hand dexterity through touch. RSS 2023.
[129] Yin, Z.-H., et al. (2024). Learning in-hand translation using a binary 3-axis tactile skin. arXiv preprint.
[130] Pitz, J., et al. (2024). Dextrous tactile in-hand manipulation using modular RL. Humanoid Robots.
[131] Wu, C., et al. (2025). Canonical 3D tactile representation for visuo-tactile imitation learning. ICRA 2025.
[132] Yuan, Y., et al. (2024). Robot Synesthesia: In-hand manipulation with visuotactile sensing. ICRA 2024.
[133] Yu, J., Liu, H., Yu, Q., et al. (2025). ForceVLA: Enhancing VLA models with a force-aware MoE for contact-rich manipulation. NeurIPS 2025.
[134] Yu, M., et al. (2025). RGMC Champion: Kinematic trajectory optimization. IEEE RA-L.
[135] Helmut, E., Funk, N., Schneider, T., de Farias, C., & Peters, J. (2025). Tactile-conditioned diffusion policy for force-aware robotic manipulation. ICRA 2026. arXiv:2510.13324.
[136] Hao, P., Zhang, C., Li, D., Cao, X., Hao, X., Cui, S., & Wang, S. (2025). TLA: Tactile-language-action model for contact-rich manipulation. arXiv preprint. arXiv:2503.08548.
[137] Oller, M., Berenson, D., & Fazeli, N. (2024). Tactile-driven non-prehensile object manipulation via extrinsic contact mode control. RSS 2024.
[138] Various. (2025). Robust in-hand manipulation with motion-contact planning. arXiv:2505.04978.
[139] Huang, J., Wang, S., Lin, F., Hu, Y., Wen, C., & Gao, Y. (2025). Tactile-VLA: Unlocking vision-language-action model's physical knowledge for tactile generalization. OpenReview.
[140] Various. (2024). Survey of learning-based in-hand manipulation. Frontiers in Robotics and AI.
[141] Various. (2024). Robot intelligent grasping based on tactile perception. Elsevier.
[142] Li, Y., et al. (2025). Dexterous manipulation through imitation learning: A survey. arXiv preprint. arXiv:2504.03515.
[143] Various. (2025). LAPA: Latent action pretraining from human videos. ICLR 2025.
[144] Ye, L., et al. (2025). Visual-tactile self-supervised pretraining for robotic manipulation. Science Robotics.
[145] Zhao, T. Z., et al. (2024). ALOHA Unleashed: A simple recipe for robot dexterity. CoRL 2024.
[146] Adeniji, A., et al. (2025). Feel the Force: Contact-driven learning from humans. arXiv:2506.01944.
[147] Brohan, A., Brown, N., et al. (2023). RT-1: Robotics Transformer for real-world control at scale. RSS 2023. arXiv:2212.06817.
[148] Brohan, A., Brown, N., et al. (2023). RT-2: Vision-Language-Action models transfer web knowledge to robotic control. CoRL 2023. arXiv:2307.15818.
[149] Kim, M. J., Pertsch, K., Karamcheti, S., et al. (2024). OpenVLA: An open-source Vision-Language-Action model. arXiv:2406.09246.
[150] Octo Model Team. (2024). Octo: An open-source generalist robot policy. arXiv:2405.12213.
[151] Physical Intelligence. (2024). pi0: A Vision-Language-Action flow model for general robot control. arXiv:2410.24164.
[152] Google DeepMind. (2025). Gemini Robotics: Bringing AI into the physical world. arXiv:2503.20020.
[153] Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., & Le, M. (2023). Flow matching for generative modeling. ICLR 2023. arXiv:2210.02747.
[154] Various. (2026). VLA systematic review. Information Fusion (Elsevier). https://doi.org/10.1016/j.inffus.2025.103148.
[155] Various. (2025). What matters in building VLA models. Nature Machine Intelligence. https://doi.org/10.1038/s42256-025-01168-7.
[156] Various. (2025). Diffusion models for robotic manipulation survey. Frontiers. https://doi.org/10.3389/frobt.2025.1606247.
[158] NVIDIA. (2025). GR00T N1: Open humanoid foundation model.
[159] NVIDIA. (2026). GR00T N1.6: Reasoning added via Cosmos Reason.
[160] Figure AI. (2025). Helix: VLA for full humanoid upper body (35 DoF).
[161] Physical Intelligence. (2025). pi0 human-to-robot transfer: Human co-finetuning for generalization. Technical report.
[162] Various. (2025). EgoVLA: Egocentric human video pretraining for robot VLA. arXiv preprint.
[163] Various. (2025). PhysBrain: Physical world fine-tuning of VLMs via 3M VQA from Ego4D. arXiv preprint.
[164] Si, Z., Qian, K., Sontakke, N., et al. (2025). ExoStart: Efficient learning for dexterous manipulation with sensorized exoskeleton demonstrations. arXiv preprint.
[165] Dan, Y., et al. (2025). X-Sim: Real-to-Sim-to-Real pipeline.
[167] Various. (2024). TacEx: GelSight simulation in Isaac Sim.
[168] Various. (2025). Sim-to-real reinforcement learning for vision-based dexterous manipulation on humanoids. arXiv preprint. arXiv:2502.20396.
[169] Various. (2025). Human-in-the-loop RL for precise dexterous manipulation. Science Robotics. https://doi.org/10.1126/scirobotics.ads5033.
[170] NVIDIA. (2025). GR00T N1: An open foundation model for generalist humanoid robots. arXiv preprint. arXiv:2503.14734.
[171] NVIDIA. (2026). Synthetic data pipeline: 780K trajectories in 11 hours. GTC 2026 Keynote.
[172] Lepora, N. F. (2025). Tactile Robotics: Past and Future. arXiv:2512.01106.
[173] Lipman, Y., et al. (2023). Flow matching for generative modeling. ICLR 2023.
[174] Various. (2025). DexWM: Dexterous world models from human video. arXiv preprint. Meta FAIR.
[175] Qin, Y., et al. (2023). AnyTeleop: A general vision-based dexterous robot teleoperation system. RSS 2023.
[176] Dan, Y., et al. (2025). X-Sim: Cross-embodiment learning via real-to-sim-to-real. CoRL 2025 (Oral).
[177] Romero, J., Tzionas, D., & Black, M. J. (2017). Embodied hands (MANO). SIGGRAPH Asia 2017.
[178] Various. (2025). Human2Sim2Robot: Internet video to robot policy pipeline.
[179] Various. (2025). AnyTouch: Unified static-dynamic tactile representation. arXiv:2502.12191.
[180] Lv, Y., et al. (2025). ManipTrans: Efficient bimanual dexterous manipulation retargeting. CVPR 2025.
[181] Park, J., et al. (2025). Joint motion manifold for human-to-robot hand retargeting. arXiv preprint, Jan 2025.
[182] Chen, B., et al. (2024). Mirage: Cross-embodiment zero-shot transfer via cross-painting. RSS 2024.
[183] Various. (2025). H2R: Human-to-robot video augmentation for policy pretraining. arXiv preprint.
[184] Various. (2025). Masquerade: Human video to robot visual transformation. arXiv preprint.
[185] Kareer, S., et al. (2024). EgoMimic: Scaling imitation learning via egocentric video. arXiv preprint.
[186] NVIDIA. (2026). EgoScale: Scaling robot policy learning with 20K hours of human egocentric data. arXiv preprint, Feb 2026.
[187] Various. (2026). AoE: Augmentation of experience via human ego demonstrations. arXiv preprint, Feb 2026.
[188] Physical Intelligence. (2025). pi0 human-to-robot transfer: Co-finetuning for cross-embodiment generalization. Technical Report, Dec 2025.
[189] Various. (2025). EgoZero: Zero robot data policy learning from smart glasses. arXiv preprint.
[190] Various. (2025). VidBot: Internet video to 3D affordance for zero-shot robot control. CVPR 2025.
[191] Various. (2025). Human2Bot: Task similarity reward from human video for zero-shot robot control. Autonomous Robots, 2025.
[192] Yuan, Y., et al. (2024). Robot Synesthesia. ICRA 2024.
[193] Suresh, S., et al. (2024). NeuralFeels. Science Robotics, 9(86).
[194] Huang, B., et al. (2024). 3D-ViTac. CoRL 2024.
[195] Yu, J., et al. (2025). ForceVLA: Enhancing VLA models with a force-aware MoE for contact-rich manipulation. NeurIPS 2025.
[196] Various. (2025). Tactile-VLA. OpenReview.
[197] Yang, F., et al. (2024). UniTouch. CVPR 2024.
[198] Higuera, C., et al. (2024). Sparsh. CoRL 2024.
[199] Liu, K., et al. (2025). VTV-LLM: Robotic perception with a large tactile-vision-language model. arXiv preprint. arXiv:2506.19303.
[200] Fu, Z., Zhao, T. Z., & Finn, C. (2024). Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint. arXiv:2401.02117.
[201] Various. (2024). TacEx: GelSight tactile simulation in Isaac Sim. arXiv preprint. arXiv:2411.04776.
[203] Yu, M., et al. (2025). RGMC Champion. IEEE RA-L.
[204] Albini, A., et al. (2025). Tactile data representation review. arXiv (IEEE T-RO).
[205] Open X-Embodiment Collaboration. (2024). Open X-Embodiment: Robotic learning datasets and RT-X models. ICRA 2024.
[206] Mao, Q., Liao, Z., Yuan, J., & Zhu, R. (2024). Multimodal tactile sensing fused with vision for dexterous robotic housekeeping. Nature Communications, 15, 6871. https://doi.org/10.1038/s41467-024-51261-5
[207] Various. (2025). Simultaneous tactile-visual perception for learning multimodal robot manipulation. arXiv preprint. arXiv:2512.09851.
[208] Various. (2025). Multimodal fusion and vision-language models: A survey for robot vision. Information Fusion (Elsevier). arXiv:2504.02477.
[209] Various. (2025). Tactile Robotics: An outlook. arXiv preprint. arXiv:2508.11261.
[210] NVIDIA. (2026). GR00T N1.5: Advancing humanoid robot foundation models. GTC 2026.
[211] Figure AI. (2025). Helix: A vision-language-action model for full humanoid control. Company technical report.
[212] Figure AI. (2025). Figure 02 deployment at BMW Spartanburg. Company press release.
[213] Tesla. (2025). Optimus Gen 3 humanoid robot. Tesla AI Day 2025.
[214] Sanctuary AI. (2025). Phoenix Gen 8: Tactile-integrated humanoid. Company technical report.
[215] 1X Technologies. (2025). NEO: Consumer humanoid robot. Company press release.
[216] Agility Robotics. (2025). Digit deployment at Amazon fulfillment centers. Company press release.
[217] Boston Dynamics. (2025). Electric Atlas humanoid robot. Company product announcement.
[218] Apptronik. (2025). Apollo humanoid: Mercedes-Benz factory pilot. Company press release.
[219] Unitree Robotics. (2026). G1 humanoid robot: 5,500+ units shipped. IPO prospectus / Company report.
[220] Wonik Robotics. (2025). Allegro hand + Digit Plexus integration. Company announcements.
[221] Hyundai Motor Group. (2025). Robotics investment plan: KRW 125.2 trillion. Company investor report.
[222] Samsung Electronics. (2025). Rainbow Robotics acquisition and Future Robotics Office. Company press release.
[223] Markets and Markets. (2025). Humanoid robot market: $2.9B (2025) to $15.3B (2030). Market research report.
[224] Goldman Sachs. (2025). Humanoid robot shipment forecast: 250K+ units by 2030. Equity research report.
[226] Various. (2023). DeXtreme. ICRA 2023.
[227] Bhirangi, R., et al. (2024). AnySkin. ICRA 2025.
[228] Various. (2025). NRE-skin. PNAS.
[229] Various. (2026). Bioinspired spiking architecture. Nature Communications.
[230] Physical Intelligence. (2025). pi0.5/RECAP: Post-deployment RL for continuous improvement. arXiv preprint. arXiv:2504.16932.
[231] Shaw, K., et al. (2024). Learning from internet videos. CMU.
[232] NVIDIA. (2026). 780K trajectories in 11 hours. GTC 2026.
[235] Bicchi, A. (2000). Hands for dexterous manipulation. IEEE T-RA.
[236] Billard, A., & Kragic, D. (2019). Trends and challenges. Science.
[237] Various. (2026). VLA systematic review. Information Fusion.
[238] Various. (2025). What matters in building VLA models. Nature MI.
[239] Hogan, N. (1985). Impedance control. JDSMC.
[240] Bansal, A., et al. (2026). EgoScale: Scaling laws for egocentric human data in robot learning. arXiv preprint. NVIDIA Research.
[241] Rishabh, A., et al. (2025). X-Sim: Cross-embodiment simulation for robot learning. CoRL 2025 (Oral). Cornell University.
[242] Bahl, S., et al. (2025). VidBot: Learning robot policies from internet videos. CVPR 2025. TU Munich.
[243] Wang, Y., et al. (2025). EgoZero: Robot learning from smart glasses demonstrations. arXiv preprint.
[244] Apple ML Research. (2025). EgoDex: Learning dexterous manipulation from large-scale egocentric video. 829 hours, 90M frames. arXiv preprint.
[245] Grauman, K., et al. (2022). Ego4D: Around the world in 3,000 hours of egocentric video. CVPR 2022. Meta AI. 3,670 hours, 931 participants.
Acknowledgment
This book is a comprehensive survey of tactile sensing for dexterous robot hand manipulation, tracing the research evolution from sensor technology to VLA models and sim-to-real transfer.
This project was built using the Harness skill by Minho Hwang.
AI tools were used in the production of this work: Claude (Opus 4.6) for literature survey, content generation, and manuscript preparation.