Domain: mkhangg.com
More information on this domain is in AlienVault OTX.
DNS Resolutions

Date          IP Address
2025-09-25    13.225.143.46  (ClassC)
2026-01-04    3.169.173.69   (ClassC)
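The resolutions above are point-in-time records. As a quick cross-check (not part of the original report), the minimal Python sketch below re-resolves the domain and compares the live A records against the two addresses recorded above; CloudFront rotates its edge IPs, so an empty overlap is not by itself suspicious.

# Minimal sketch (not from the original report): re-resolve the domain and
# compare the live A records against the DNS Resolutions table above.
import socket

DOMAIN = "mkhangg.com"
RECORDED_IPS = {"13.225.143.46", "3.169.173.69"}  # from the table above

def current_a_records(domain: str) -> set[str]:
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # for AF_INET entries, sockaddr is (ip_address, port).
    infos = socket.getaddrinfo(domain, None, family=socket.AF_INET)
    return {sockaddr[0] for *_, sockaddr in infos}

if __name__ == "__main__":
    live = current_a_records(DOMAIN)
    print("live A records:  ", sorted(live))
    print("recorded by scan:", sorted(RECORDED_IPS))
    print("overlap:         ", sorted(live & RECORDED_IPS))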
Port 80
HTTP/1.1 301 Moved Permanently
Server: CloudFront
Date: Sun, 04 Jan 2026 04:13:07 GMT
Content-Type: text/html
Content-Length: 167
Connection: keep-alive
Location: https://mkhangg.com/
X-Cache: Redirect from cloudfront
Via: 1.1 cb2339b8008ceeabfc2dd9e6cfbc465c.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: HIO52-P4
X-Amz-Cf-Id: Qs6LM8_HLKWt7zftBUMma3TA99mAEuTez5odYsN0KMnaNQQ4JUYEuA

<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
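To reproduce the plain-HTTP probe above (a sketch, not part of the original scan), a GET over port 80 with Python's standard http.client should return the same 301 from CloudFront with the Location header pointing at https://mkhangg.com/.

# Minimal sketch (not from the original report): repeat the port-80 probe
# and confirm the CloudFront redirect to HTTPS.
import http.client

conn = http.client.HTTPConnection("mkhangg.com", 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)                 # 301 Moved Permanently in the capture above
print("Location:", resp.getheader("Location"))  # https://mkhangg.com/ in the capture above
print("Server:  ", resp.getheader("Server"))    # CloudFront in the capture above
conn.close()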
Port 443
HTTP/1.1 200 OKContent-Type: text/htmlContent-Length: 92285Connection: keep-aliveLast-Modified: Fri, 19 Dec 2025 10:45:02 GMTx-amz-server-side-encryption: AES256Accept-Ranges: bytesServer: AmazonS3Date: Sat, 03 Jan 2026 16:09:35 GMTETag: b33ddced38e0519f8444cd31578d66ddX-Cache: Hit from cloudfrontVia: 1.1 5ec2b95241693f962e2ff4afc726b38e.cloudfront.net (CloudFront)X-Amz-Cf-Pop: HIO52-P4X-Amz-Cf-Id: mwmohhq8QfW-JYYayyfhA0hltf9KljSS4XELOapIprpYOq2JZ6eAGwAge: 43413 !DOCTYPE html>html langen> head> !-- Metadata --> meta charsetutf-8/> meta nameviewport contentwidthdevice-width, initial-scale1, shrink-to-fitno/> meta namedescription contentwebsite/> meta nameauthor contentkhang nguyen/> title>khang nguyen/title> link relicon typeimage/x-icon hrefassets/img/mandu_icon.png/> !-- Font Awesome icons --> script srchttps://use.fontawesome.com/releases/v5.15.3/js/all.js>/script> !-- Google fonts--> link relstylesheet hrefhttps://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css> link relstylesheet hrefhttps://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css> link hrefhttps://fonts.googleapis.com/css?familySaira+Extra+Condensed:500,700 relstylesheet typetext/css/> link hrefhttps://fonts.googleapis.com/css?familyMuli:400,400i,800,800i relstylesheet typetext/css/> link hrefhttps://fonts.googleapis.com/css2?familyFira+Code&displayswap relstylesheet> link relstylesheet hrefhttps://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css> link relstylesheet hrefhttps://cdnjs.cloudflare.com/ajax/libs/OwlCarousel2/2.3.4/assets/owl.carousel.min.css> link relstylesheet hrefhttps://cdnjs.cloudflare.com/ajax/libs/OwlCarousel2/2.3.4/assets/owl.theme.default.min.css> !-- Core theme CSS --> link hrefstyles/styles.css relstylesheet/> /head> body> !-- Moving particles --> canvas idcanvas>/canvas> !-- Progress bar on top --> div classprogress-bar-container> div classprogress-bar idprogressBar>/div> /div> !-- Back to top button --> a idback-to-top-button>/a> !-- Toggle dark/light theme button --> button classtoggle-theme-button onclicktoggleTheme()>☀️/button> !-- Assitant icon saying about theme changes --> div classpopup-icon-container idpopupIconContainer draggabletrue> div classicon>img srcassets/img/mandu_icon.png width65 height65>/div> div classspeech-balloon>/div> /div> !-- Dismissal area for assistant icon --> div classdismissal-area iddismissalArea>✖/div> !-- Navigation bar --> !-- Fusion of jQuery slidebar (https://codepen.io/BeshoyRomany/pen/qmNPwN) and animated hamburger menu (https://codepen.io/amberweinberg/pen/yeqJgG) --> nav> a classnav-toggle-btn onclicktoggleNav()> span>/span> span>/span> span>/span> /a> ul> li>p onclickscrollToTopDiv(html)>span classemoji>🏡/span> span classtext>home/span>/p>/li> li>p onclickscrollToTopDiv(#updates)>span classemoji>📜/span> span classtext>updates/span>/p>/li> li>p onclickscrollToTopDiv(#research)>span classemoji>📚/span> span classtext>publications/span>/p>/li> li>p onclickscrollToTopDiv(#outreach)>span classemoji>🧩/span> span classtext>outreach/span>/p>/li> li>p onclickscrollToTopDiv(#resources)>span classemoji>⛏️/span> span classtext>resources/span>/p>/li> li>p onclickscrollToTopDiv(#gallery)>span classemoji>🖼️/span> span classtext>gallery/span>/p>/li> /ul> /nav> !-- Content --> div classcontainer stylepadding-top: 3rem;> !-- About section -->div classrow mb-4> div classcol-lg-2 col-md-4> div classring-container> div classring> div classhollow-ring> img classprofile-image srcassets/img/shanghai_me.png altkhang nguyen 
/> div classemoji-indicator> 🍀 span classhover-text> four-leaf clover /span> /div> /div> /div> /div> hr /> div classsocial-icons> a classsocial-icon hrefhttps://scholar.google.com/citations?userZ6_5ZTEAAAAJ target_blank relnoopener titleGoogle Scholar>i classfa fa-graduation-cap stylefont-size: 35px; color: #4285f4>/i>/a> a classsocial-icon hrefhttps://www.researchgate.net/profile/Khang-Nguyen-133 target_blank relnoopener titleResearchGate>i classfab fa-researchgate stylefont-size: 35px; color: #00D0BB>/i>/a> a classsocial-icon hrefhttps://github.com/mkhangg target_blank relnoopener titleGitHub>i classfab fa-github stylefont-size: 35px; color: #171515>/i>/a> a classsocial-icon hrefhttps://www.youtube.com/@_m.khangg_ target_blank relnoopener titleYouTube>i classfab fa-youtube stylefont-size: 35px; color: #FF0000>/i>/a> a classsocial-icon hrefassets/doc/Resume_KhangNguyen.pdf target_blank relnoopener titleResume>i classfas fa-file-alt stylefont-size: 35px; color: #bd5d38>/i>/a> /div> div classsocial-icons> a classsocial-icon hrefhttps://x.com/mkhangg target_blank relnoopener titleTwitter/X>i classfab fa-twitter stylefont-size: 35px; color: #1DA1F2>/i>/a> a classsocial-icon hrefhttps://orcid.org/0000-0003-3471-5533 target_blank relnoopener titleORCID>i classfab fa-orcid stylefont-size: 35px; color: #a6ce39>/i>/a> a classsocial-icon onclickscrollToTopDiv(#research); titleProjects>i classfa fa-shapes stylefont-size: 35px; color: #e8828c>/i>/a> a classsocial-icon hrefhttps://maps.app.goo.gl/8GaYnGrRvXgbP8VV8 target_blank relnoopener titleLocation>i classfa fa-map-marker-alt stylefont-size: 35px; color: #635050>/i>/a> a classsocial-icon idcontact-card-trigger titleContact Info>i classfa fa-id-card stylefont-size: 35px; color: #cbbb5f>/i>/a> /div> p>/p> /div> !-- Inspired by Thanh Tran -- https://codepen.io/thanhrossi/pen/pvOEzq --> !-- Re-written, redesigned, and integrated (from HTML (Pug) and CSS (Less) to pure HTML and CSS) --> div classcontact-card-overlay idoverlay-bg> div classinformation_card> div idfront_end_card> div classfront> div classcontact-info-card> div classcode-line>span classline-number>1/span>span classcode-content>span classkeyword>class/span> span classnamespace>ContactInformationCard/span>: /span>/div> div classcode-line>span classline-number>2/span>span classcode-content> span classkeyword>def/span> span classfunction>__init__/span>(span classself>self/span>):/span>/div> div classcode-line>span classline-number>3/span>span classcode-content> span classself>self/span>.span classself>dept/span> span classstring>robotics @ mbzuai/span> /span>/div> div classcode-line>span classline-number>4/span>span classcode-content> span classself>self/span>.span classself>lab/span> span classstring>netbot lab/span> /span>/div> div classcode-line>span classline-number>5/span>span classcode-content> span classself>self/span>.span classself>email/span> span classstring>khang.nguyen@mbzuai.ac.ae/span> /span>/div> div classcode-line>span classline-number>6/span>span classcode-content> span classself>self/span>.span classself>phone/span> span classstring>(+971) 56 937 2539/span> /span>/div> div classcode-line>span classline-number>7/span>span classcode-content>/span>/div> div classcode-line>span classline-number>8/span>span classcode-content> span classkeyword>def/span> span classfunction>flipCard/span>(span classself>self/span>):/span>/div> div classcode-line>span classline-number>9/span>span classcode-content> span classfunction>print/span>(span classstring>tap on the card to 
flip./span>)/span>/div> div classcode-line>span classline-number>10/span>span classcode-content>/span>/div> div classcode-line>span classline-number>11/span>span classcode-content> span classkeyword>def/span> span classfunction>closeCard/span>(span classself>self/span>):/span>/div> div classcode-line>span classline-number>12/span>span classcode-content> span classfunction>print/span>(span classstring>tap outside to close it./span>)/span>/div> /div> /div> div classback> div classcontact-info-card> p classcard-name>khang nguyen/p> a hrefhttps://mkhangg.com/ target_blank relnoopener classcard-website-link titleWebsite>mkhangg.com/a> /div> /div> /div> /div> /div> div classcol-lg-10 col-md-8> h2> khang nguyen span classtext-primary> /kʰæŋ/ /span> span idvolumeEmoji rolebutton> 🎧 /span> /h2> p>/p> I am currently a research assistant in the hightlight>Department of Robotics/hightlight> of the a hrefhttps://mbzuai.ac.ae/ target_blank relnoopener>Mohamed bin Zayed University of Artificial Intelligence/a>, working at the a hrefhttps://ix0tuaezs81enc3q4po8va.on.drv.tw/www.networked-robot.online/ target_blank relnoopener>NetBot Lab/a> under a hrefhttps://mbzuai.ac.ae/study/faculty/dezhen-song/ target_blank relnoopener>Dr. Dezhen Song/a>. Prior to this, I spent time in both a hrefhttps://www.uta.edu/academics/schools-colleges/engineering/academics/departments/cse target_blank relnoopener>Department of Computer Science and Engineering/a> and a hrefhttps://www.uta.edu/academics/schools-colleges/engineering/academics/departments/bioengineering target_blank relnoopener>Department of Bioengineering/a> of the a hrefhttps://www.uta.edu/ target_blank relnoopener>University of Texas at Arlington/a>, where I was at the hightlight>Learning and Adaptive Robotics Lab/hightlight> under a hrefhttps://www.uta.edu/academics/faculty/profile?usernamehuber target_blank relnoopener>Dr. Manfred Huber/a> and at the a hrefhttps://www.uta.edu/academics/schools-colleges/engineering/research/centers-and-labs/mmin target_blank relnoopener>Multimodal Imaging and Neuromodulation Lab/a> under a hrefhttps://www.uta.edu/academics/faculty/profile?userhanli target_blank relnoopener>Dr. Hanli Liu/a>. p>/p> I grew up in a hrefhttps://en.wikipedia.org/wiki/Saigon target_blank relnoopener>Saigon, Vietnam/a>, and was fortunate to spend my most memorable time at the a hrefhttps://en.wikipedia.org/wiki/VNU-HCM_High_School_for_the_Gifted target_blank relnoopener>VNU-HCM High School for the Gifted/a> (informatics program of 2020) and previously a hrefhttps://en.wikipedia.org/wiki/Tr%E1%BA%A7n_%C4%90%E1%BA%A1i_Ngh%C4%A9a_High_School_for_the_Gifted target_blank relnoopener>TĐN Secondary School for the Gifted/a> (mathematics program of 2017). p>/p> div classcodebox> From collaborative robots to humanoids, my research focuses on exploring explainable learning paradigms for vision-tactile and whole-body loco-manipulation. Two core directions are (i) multimodal representation learning to advance robotic manipulation skills, and (ii) long-horizon planning and control for full-body loco-manipulation tasks, where these are guided by neuromorphic mechanisms (i.e., metacognition, self-inference, and introspection) through verifiable signifiers in diverse unstructured settings under real-world uncertainty and stochasticity. 
p>/p> b>Vision-Tactile Manipulation Planning:/b> multimodal representation learning, manipulative vision-tactile skills br /> b>Whole-Body Loco-Manipulation:/b> learning for bipedal locomotion, long-horizon loco-manipulation skills br /> /div> /div>/div>!-- Updates section -->hr />div classrow idupdates> div classcol> h2 clssmb-5>📜 updates/h2> p>/p> div classowl-carousel owl-theme> div classnews-card> img srcassets/img/updates_gan/talk_boy_4.jpg classw-full rounded-lg /> div classnews-desc>I served as a speaker at hightlight>Vietnam Symposium on Robotics/hightlight>!/div> div classnews-time>December 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/team_robot.jpg classw-full rounded-lg /> div classnews-desc>hightlight>DoublyAware/hightlight> is accepted to hightlight>IEEE RA-L SI on Legged Robots/hightlight>!/div> div classnews-time>November 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy_8.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>ICLR 2026/hightlight> and hightlight>IEEE RA-L/hightlight>./div> div classnews-time>October 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/media_lpm.jpg classw-full rounded-lg /> div classnews-desc>Our work on hightlight>LPM/hightlight> is featured on the hightlight>UTokyo Press Release/hightlight>./div> div classnews-time>October 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/hangzhou_boy.jpg classw-full rounded-lg /> div classnews-desc>I will attend IROS 2025 in hightlight>Hangzhou, China/hightlight>./div> div classnews-time>October 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/talk_boy_3.jpg classw-full rounded-lg /> div classnews-desc>I gave a mini-talk at the weekly hightlight>RoboCoffee/hightlight> seminar at hightlight>MBZUAI/hightlight>./div> div classnews-time>October 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy_7.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>ICRA 2026/hightlight> and hightlight>IEEE RA-L/hightlight>./div> div classnews-time>September 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/talk_boy_2.jpg classw-full rounded-lg /> div classnews-desc>I am pleased to be invited to give a virtual talk at hightlight>PiMA/hightlight> in Vietnam./div> div classnews-time>August 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/mbzuai_boy.jpg classw-full rounded-lg /> div classnews-desc>I am officially on board for my hightlight>research position/hightlight> at hightlight>MBZUAI/hightlight>!/div> div classnews-time>July 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/talk_boy.jpg classw-full rounded-lg /> div classnews-desc>I am pleased to be invited to give a talk at hightlight>VinRobotics/hightlight> in Vietnam./div> div classnews-time>June 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/chinese_bots.jpg classw-full rounded-lg /> div classnews-desc>hightlight>FlowMP/hightlight> and hightlight>Liquid Pouch Actuator/hightlight> are accepted to IROS 2025!/div> div classnews-time>June 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy_6.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>Humanoids 2025/hightlight> and hightlight>IEEE T-RO/hightlight>./div> div classnews-time>May 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/atlanta_boy.jpg classw-full rounded-lg /> div 
classnews-desc>I will attend ICRA 2025 in hightlight>Atlanta, Georgia/hightlight>./div> div classnews-time>May 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy_5.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>IROS 2025/hightlight>./div> div classnews-time>April 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy_4.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>IEEE RA-L/hightlight> and hightlight>Neurocomputing/hightlight>./div> div classnews-time>February 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/programming_class.jpg classw-full rounded-lg /> div classnews-desc>I will be the GTA for hightlight>CSE 1310 (Programming) course/hightlight> this semester!/div> div classnews-time>January 2025/div>/div> div classnews-card> img srcassets/img/updates_gan/robot_celebrate.jpg classw-full rounded-lg /> div classnews-desc>Our paper on hightlight>distortion-aware images/hightlight> is featured at VISAPP 2025./div> div classnews-time>December 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/bot_bme.jpg classw-full rounded-lg /> div classnews-desc>Our poster is presented at the hightlight>North Texas BME Symposium/hightlight> at UTSW./div> div classnews-time>November 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy_3.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>ICRA 2025/hightlight>, hightlight>IEEE TNNLS/hightlight>, and hightlight>IEEE RA-L/hightlight>./div> div classnews-time>October 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/boy_cochair.jpg classw-full rounded-lg /> div classnews-desc>I will serve as co-chair for the hightlight>Object Detection session/hightlight> at IROS 2024./div> div classnews-time>October 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/abudhabi_boy.jpg classw-full rounded-lg /> div classnews-desc>I will attend IROS 2024 in hightlight>Abu Dhabi, UAE/hightlight>./div> div classnews-time>October 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/fix_robot.jpg classw-full rounded-lg /> div classnews-desc>I will be the GTA for hightlight>CSE 4360 (Robotics) course/hightlight> this semester!/div> div classnews-time>August 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/scanner_bots.jpg classw-full rounded-lg /> div classnews-desc>hightlight>V3D-SLAM/hightlight> and hightlight>Refined PanMap/hightlight> papers are accepted to IROS 2024!/div> div classnews-time>June 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/lab_boy.jpg classw-full rounded-lg /> div classnews-desc>I will continue my hightlight>PhD studies in robotics/hightlight> at the LEARN Lab./div> div classnews-time>May 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/graduation_boy.jpg classw-full rounded-lg /> div classnews-desc>I finished my hightlight>undergrad studies in computer science/hightlight> at UTA./div> div classnews-time>May 2024/div>/div> div classnews-card> img srcassets/img/updates_gan/city_boy.jpg classw-full rounded-lg /> div classnews-desc>I will attend IROS 2023 in hightlight>Detroit, Michigan/hightlight>./div> div classnews-time>October 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/travel_boy.jpg classw-full rounded-lg /> div classnews-desc>I will present at ISR 2023 in hightlight>Stuttgart, Germany/hightlight>./div> div 
classnews-time>September 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/holding_can_robot.jpg classw-full rounded-lg /> div classnews-desc>The hightlight>deformable object classification/hightlight> paper is accepted to ISR 2023!/div> div classnews-time>June 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/calibrating_robot.jpg classw-full rounded-lg /> div classnews-desc>The hightlight>multiplanar self-calibration/hightlight> paper is accepted to IROS 2023!/div> div classnews-time>June 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/fireworks_robot.jpg classw-full rounded-lg /> div classnews-desc>My hightlight>thesis proposal/hightlight> is approved by the Honors College!/div> div classnews-time>June 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy_2.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>CASE 2023/hightlight>./div> div classnews-time>April 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/happy_robot.jpg classw-full rounded-lg /> div classnews-desc>The hightlight>PerFC/hightlight> paper is accepted to FLAIRS-36!/div> div classnews-time>March 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/reading_boy.jpg classw-full rounded-lg /> div classnews-desc>I am invited to review papers at hightlight>UR 2023/hightlight>./div> div classnews-time>February 2023/div>/div> div classnews-card> img srcassets/img/updates_gan/spider_bot.jpg classw-full rounded-lg /> div classnews-desc>hightlight>Spidey/hightlight> won sponsorship prizes at HackMIT 2022!/div> div classnews-time>October 2022/div>/div> div classnews-card> img srcassets/img/updates_gan/boy_robot.jpg classw-full rounded-lg /> div classnews-desc>I joined the hightlight>Learning and Adaptive Robotics (LEARN) Lab/hightlight> at UTA./div> div classnews-time>August 2022/div>/div> div classnews-card> img srcassets/img/updates_gan/happy_tree.jpg classw-full rounded-lg /> div classnews-desc>The hightlight>IoTree/hightlight> paper is accepted to MobiCom 2022!/div> div classnews-time>June 2022/div>/div> div classnews-card> img srcassets/img/updates_gan/farm_bot.jpg classw-full rounded-lg /> div classnews-desc>hightlight>iPlanter/hightlight> won prizes at GaTech RoboTech Hackathon 2022!/div> div classnews-time>April 2022/div>/div> div classnews-card> img srcassets/img/updates_gan/boy_systems.jpg classw-full rounded-lg /> div classnews-desc>I joined the hightlight>Wireless and Sensor Systems Lab (WSSL)/hightlight> at UTA./div> div classnews-time>August 2021/div>/div> div classnews-card> img srcassets/img/updates_gan/boy_school.jpg classw-full rounded-lg /> div classnews-desc>I start my undergrad at the hightlight>University of Texas at Arlington (UTA)/hightlight>./div> div classnews-time>August 2020/div>/div> /div> p>/p> /div>/div>!-- Research section -->hr />div classrow idresearch> div classcol> h2 clssmb-5>📚 publications/h2> p>/p> div idfilters-project> button classfilter-button active data-filter*>all/button> button classfilter-button data-filterselected>selected/button> button classfilter-button data-filterperception manipulation>perception + manipulation/button> button classfilter-button data-filterplanning control>planning + control/button> button classfilter-button data-filterslam>localization + mapping/button> button classfilter-button data-filterothers>others/button> /div> p>/p> div idprojects classisotope> div classproject data-filterselected> div classrow mb-4> div classcol-sm-4> 
img width100% heightauto classw-full rounded-lg srcassets/img/demo_doublyaware.gif /> /div> div classcol-sm-8> b>i>DoublyAware/i>:/b> b>Dual Planning and Policy Awareness for Temporal Difference Learning in Humanoid Locomotion/b> br /> i>a href target_blank relnoopener>IEEE RA-L 2025, Special Issue on Legged Robots/a>/i> br /> u>Khang Nguyen/u>, An Thai Le, Jan Peters, Minh Nhat Vu. br /> a hrefhttps://arxiv.org/pdf/2506.12095 target_blank relnoopener>PDF/a> br /> u>b>i>Abstract/i>/b>:/u> Achieving robust robot learning for humanoid locomotion is a fundamental challenge in model-based reinforcement learning (MBRL), where environmental stochasticity and randomness can hinder efficient exploration and learning stability. The environmental, so-called span classcollapse idmore_doublyaware> aleatoric, uncertainty can be amplified in high-dimensional action spaces with complex contact dynamics, and further entangled with epistemic uncertainty in the models during learning phases. In this work, we propose i>DoublyAware/i>, an uncertainty-aware extension of Temporal Difference Model Predictive Control (TD-MPC) that explicitly decomposes uncertainty into two disjoint interpretable components, i.e., planning and policy uncertainties. To handle the planning uncertainty, i>DoublyAware/i> employs conformal prediction to filter candidate trajectories using quantile-calibrated risk bounds, ensuring statistical consistency and robustness against stochastic dynamics. Meanwhile, policy rollouts are leveraged as structured informative priors to support the learning phase with Group-Relative Policy Constraint (GRPC) optimizers that impose a group-based adaptive trust-region in the latent action space. This principled combination enables the robot agent to prioritize high-confidence, high-reward behavior while maintaining effective, targeted exploration under uncertainty. Evaluated on the HumanoidBench locomotion suite with the Unitree 26-DoF H1-2 humanoid, i>DoublyAware/i> demonstrates improved sample efficiency, accelerated convergence, and enhanced motion feasibility compared to RL baselines. Our simulation results emphasize the significance of structured uncertainty modeling for data-efficient and reliable decision-making in TD-MPC-based humanoid locomotion learning. /span> span> a href#more_doublyaware data-togglecollapse onclicktoggleText(this) idlink-more_doublyaware>... See More/a>/span> /div> /div>/div> div classproject data-filterplanning control> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_doublyaware.gif /> /div> div classcol-sm-8> b>i>DoublyAware/i>:/b> b>Dual Planning and Policy Awareness for Temporal Difference Learning in Humanoid Locomotion/b> br /> i>a href target_blank relnoopener>IEEE RA-L 2025, Special Issue on Legged Robots/a>/i> br /> u>Khang Nguyen/u>, An Thai Le, Jan Peters, Minh Nhat Vu. br /> a hrefhttps://arxiv.org/pdf/2506.12095 target_blank relnoopener>PDF/a> br /> u>b>i>Abstract/i>/b>:/u> Achieving robust robot learning for humanoid locomotion is a fundamental challenge in model-based reinforcement learning (MBRL), where environmental stochasticity and randomness can hinder efficient exploration and learning stability. The environmental, so-called span classcollapse idmore_doublyaware> aleatoric, uncertainty can be amplified in high-dimensional action spaces with complex contact dynamics, and further entangled with epistemic uncertainty in the models during learning phases. 
In this work, we propose i>DoublyAware/i>, an uncertainty-aware extension of Temporal Difference Model Predictive Control (TD-MPC) that explicitly decomposes uncertainty into two disjoint interpretable components, i.e., planning and policy uncertainties. To handle the planning uncertainty, i>DoublyAware/i> employs conformal prediction to filter candidate trajectories using quantile-calibrated risk bounds, ensuring statistical consistency and robustness against stochastic dynamics. Meanwhile, policy rollouts are leveraged as structured informative priors to support the learning phase with Group-Relative Policy Constraint (GRPC) optimizers that impose a group-based adaptive trust-region in the latent action space. This principled combination enables the robot agent to prioritize high-confidence, high-reward behavior while maintaining effective, targeted exploration under uncertainty. Evaluated on the HumanoidBench locomotion suite with the Unitree 26-DoF H1-2 humanoid, i>DoublyAware/i> demonstrates improved sample efficiency, accelerated convergence, and enhanced motion feasibility compared to RL baselines. Our simulation results emphasize the significance of structured uncertainty modeling for data-efficient and reliable decision-making in TD-MPC-based humanoid locomotion learning. /span> span> a href#more_doublyaware data-togglecollapse onclicktoggleText(this) idlink-more_doublyaware>... See More/a>/span> /div> /div>/div> div classproject data-filterothers> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_refinevla.png /> /div> div classcol-sm-8> b>Multimodal Reasoning-Aware Generalist Robotic Policies via Teacher-Guided Fine-Tuning/b> br /> i>a href target_blank relnoopener>arXiv (25/05/2025)/a>/i> br /> Tuan Van Vo, Quang-Tan Nguyen, u>Khang Nguyen/u>, Nhat Xuan Tran, Duy H. M. Nguyen, An Thai Le, Ngo Anh Vien, Minh Nhat Vu. br /> a hrefhttps://arxiv.org/pdf/2505.19080 target_blank relnoopener>PDF/a> br /> u>b>i>Abstract/i>/b>:/u> Vision-Language-Action (VLA) models have gained much attention from the research community thanks to their strength in translating multimodal observations with linguistic instructions into desired robotic actions. Despite their advancements, VLAs often overlook explicit reasoning and learn the functional input-action mappings, omitting crucial logical steps, which are span classcollapse idmore_refinevla> especially pronounced in interpretability and generalization for complex, long-horizon manipulation tasks. In this work, we propose i>ReFineVLA/i>, a multimodal reasoning-aware framework that fine-tunes VLAs with teacher-guided reasons. We first augment robotic datasets with reasoning rationales generated by an expert teacher model, guiding VLA models to learn to reason about their actions. Then, we fine-tune pre-trained VLAs with the reasoning-enriched datasets with i>ReFineVLA/i>, while maintaining the underlying generalization abilities and boosting reasoning capabilities. We also conduct attention map visualization to analyze the alignment among visual observation, linguistic prompts, and to-be-executed actions of i>ReFineVLA/i>, reflecting the models ability to focus on relevant tasks and actions. Through this additional step, we explore that i>ReFineVLA/i>-trained models exhibit a meaningful agreement between vision-language and action domains, highlighting the enhanced multimodal understanding and generalization. 
Evaluated across a suite of simulated manipulation benchmarks on SimplerEnv with both WidowX and Google Robot tasks, i>ReFineVLA/i> achieves state-of-the-art performance, with an average 5.0% improvement in success rate over the second-best method on the WidowX benchmark, reaching 47.7% task success. In more visually and contextually diverse scenarios, i>ReFineVLA/i> yields 3.5% and 2.3% gains in variant aggregation (68.8%) and visual matching (76.6%) settings, respectively. Notably, it improves performance by 9.6% on the Move Near task and 8.2% on Open/Close Drawer in challenging settings. /span> span> a href#more_refinevla data-togglecollapse onclicktoggleText(this) idlink-more_refinevla>... See More/a>/span> /div> /div>/div> div classproject data-filterplanning control> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_tdgrpc.gif /> /div> div classcol-sm-8> b>i>TD-GRPC/i>:/b> b>Temporal Difference Learning with Group Relative Policy Constraint for Humanoid Locomotion/b> br /> i>a href target_blank relnoopener>arXiv (19/05/2025)/a>/i> br /> u>Khang Nguyen/u>, An Thai Le, Khai Nguyen, Jan Peters, Manfred Huber, Ngo Anh Vien, Minh Nhat Vu. br /> a hrefhttps://arxiv.org/pdf/2505.13549 target_blank relnoopener>PDF/a> br /> u>b>i>Abstract/i>/b>:/u> Robot learning in high-dimensional control settings, such as humanoid locomotion, presents persistent challenges for reinforcement learning (RL) algorithms due to unstable dynamics, complex contact interactions, and sensitivity to distributional shifts during training. span classcollapse idmore_tdgrpc> Model-based methods, e.g., Temporal-Difference Model Predictive Control (TD-MPC), have demonstrated promising results by combining short-horizon planning with value-based learning, enabling efficient solutions for basic locomotion tasks. However, these approaches remain ineffective in addressing policy mismatch and instability introduced by off-policy updates. Thus, in this work, we introduce Temporal-Difference Group Relative Policy Constraint (i>TD-GRPC/i>), an extension of the TD-MPC framework that unifies Group Relative Policy Optimization (GRPO) with explicit Policy Constraints (PC). i>TD-GRPC/i> applies a trust-region constraint in the latent policy space to maintain consistency between the planning priors and learned rollouts, while leveraging group-relative ranking to assess and preserve the physical feasibility of candidate trajectories. Unlike prior methods, i>TD-GRPC/i> achieves robust motions without modifying the underlying planner, enabling flexible planning and policy learning. We validate our method across a locomotion task suite ranging from basic walking to highly dynamic movements on the 26-DoF Unitree H1-2 humanoid robot. Through simulation results, i>TD-GRPC/i> demonstrates improvements in stability and policy robustness, along with increased sampling efficiency, during training for complex humanoid control tasks. /span> span> a href#more_tdgrpc data-togglecollapse onclicktoggleText(this) idlink-more_tdgrpc>... 
See More/a>/span> /div> /div>/div> div classproject data-filterperception manipulation> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_deguv.gif /> /div> div classcol-sm-8> b>i>DeGuV/i>:/b> b>Depth-Guided Visual Reinforcement Learning for Generalization and Interpretability in Manipulation/b> br /> i>a href target_blank relnoopener>arXiv (05/09/2025)/a>/i> br /> Tien Pham, Xinyun Chi, u>Khang Nguyen/u>, Manfred Huber, Angelo Cangelosi. br /> a hrefhttps://arxiv.org/pdf/2509.04970 target_blank relnoopener>PDF/a> br /> u>b>i>Abstract/i>/b>:/u> Reinforcement learning (RL) agents can learn to solve complex tasks from visual inputs, but generalizing these learned skills to new environments remains a major challenge in RL applications, especially robotics. While data augmentation can improve generalization, span classcollapse idmore_deguv> it often compromises sample efficiency and training stability. This paper introduces i>DeGuV/i>, an RL framework that enhances both generalization and sample efficiency. In specific, we leverage a learnable masker network that produces a mask from the depth input, preserving only critical visual information while discarding irrelevant pixels. Through this, we ensure that our RL agents focus on essential features, improving robustness under data augmentation. In addition, we incorporate contrastive learning and stabilize Q-value estimation under augmentation to further enhance sample efficiency and training stability. We evaluate our proposed method on the RL-ViGen benchmark using the Franka Emika robot and demonstrate its effectiveness in zero-shot sim-to-real transfer. Our results show that i>DeGuV/i> outperforms state-of-the-art methods in both generalization and sample efficiency while also improving interpretability by highlighting the most relevant regions in the visual input. /span> span> a href#more_deguv data-togglecollapse onclicktoggleText(this) idlink-more_deguv>... See More/a>/span> /div> /div>/div> div classproject data-filterselected> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_flowmp.gif /> /div> div classcol-sm-8> b>i>FlowMP/i>:/b> b>Learning Motion Fields for Robot Planning with Conditional Flow Matching/b> br /> i>a hrefhttps://www.iros25.org/ target_blank relnoopener>IROS 2025 (Hangzhou, China)/a>/i> br /> u>Khang Nguyen/u>, An Thai Le, Tien Pham, Manfred Huber, Jan Peters, Minh Nhat Vu. br /> a hrefhttps://mkhangg.com/assets/papers/nguyen2025flowmp.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/mkhangg/flow_mp target_blank relnoopener>CODE/a> br /> u>b>i>Abstract/i>/b>:/u> Prior flow matching methods in robotics have primarily learned velocity fields to morph one distribution of trajectories into another. In this work, we extend flow matching to capture second-order trajectory dynamics, incorporating acceleration effects either explicitly in the model or implicitly through the learning objective. Unlike diffusion models, which rely on span classcollapse idmore_flow_mp> a noisy forward process and iterative denoising steps, flow matching trains a continuous transformation (flow) that directly maps a simple prior distribution to the target trajectory distribution without any denoising procedure. 
By modeling trajectories with second-order dynamics, our approach ensures that generated robot motions are smooth and physically executable, avoiding the jerky or dynamically infeasible trajectories that first-order models might produce. We empirically demonstrate that this second-order conditional flow matching yields superior performance on motion planning benchmarks, achieving smoother trajectories and higher success rates than baseline planners. These findings highlight the advantage of learning acceleration-aware motion fields, as our method outperforms existing motion planning methods in terms of trajectory quality and planning success. /span> span> a href#more_flow_mp data-togglecollapse onclicktoggleText(this) idlink-more_flow_mp>... See More/a>/span> /div> /div>/div> div classproject data-filterplanning control> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_flowmp.gif /> /div> div classcol-sm-8> b>i>FlowMP/i>:/b> b>Learning Motion Fields for Robot Planning with Conditional Flow Matching/b> br /> i>a hrefhttps://www.iros25.org/ target_blank relnoopener>IROS 2025 (Hangzhou, China)/a>/i> br /> u>Khang Nguyen/u>, An Thai Le, Tien Pham, Manfred Huber, Jan Peters, Minh Nhat Vu. br /> a hrefhttps://mkhangg.com/assets/papers/nguyen2025flowmp.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/mkhangg/flow_mp target_blank relnoopener>CODE/a> br /> u>b>i>Abstract/i>/b>:/u> Prior flow matching methods in robotics have primarily learned velocity fields to morph one distribution of trajectories into another. In this work, we extend flow matching to capture second-order trajectory dynamics, incorporating acceleration effects either explicitly in the model or implicitly through the learning objective. Unlike diffusion models, which rely on span classcollapse idmore_flow_mp> a noisy forward process and iterative denoising steps, flow matching trains a continuous transformation (flow) that directly maps a simple prior distribution to the target trajectory distribution without any denoising procedure. By modeling trajectories with second-order dynamics, our approach ensures that generated robot motions are smooth and physically executable, avoiding the jerky or dynamically infeasible trajectories that first-order models might produce. We empirically demonstrate that this second-order conditional flow matching yields superior performance on motion planning benchmarks, achieving smoother trajectories and higher success rates than baseline planners. These findings highlight the advantage of learning acceleration-aware motion fields, as our method outperforms existing motion planning methods in terms of trajectory quality and planning success. /span> span> a href#more_flow_mp data-togglecollapse onclicktoggleText(this) idlink-more_flow_mp>... See More/a>/span> /div> /div>/div> div classproject data-filterplanning control> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_liquid_pouch.gif /> /div> div classcol-sm-8> b>Modeling The States of Liquid Phase Change Pouch Actuators by Reservoir Computing/b> br /> i>a hrefhttps://www.iros25.org/ target_blank relnoopener>IROS 2025 (Hangzhou, China)/a>/i> br /> Cedric Caremel*, u>Khang Nguyen/u>*, Anh Nguyen, Manfred Huber, Yoshihiro Kawahara, Tung D. Ta. 
br /> a hrefhttps://mkhangg.com/assets/papers/nguyen2025modeling.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/tatung/liquidpouch_reservoir target_blank relnoopener>CODE/a> br /> u>b>i>Abstract/i>/b>:/u> Liquid phase change pouch actuators (liquid pouch motors) hold great promise for a wide range of robotic applications, from artificial organs to pneumatic manipulators for dexterous manipulation. However, the usability of liquid pouch motors remains challenging due to the nonlinear intrinsic properties of liquids and their respective highly dynamic implications for span classcollapse idmore_liquid_pouch> liquid-gas phase changes, complicating state modeling and estimation. To resolve these problems, we present a reservoir computing-based method for modeling the inflation states of a customized liquid pouch motor, serving as an actuator, with four Peltier heating junctions. We use a motion capture system to track the landmark movements on the pouch, a proxy for its volumetric profile. These movements represent the internal liquid-gas phase changes of the pouch with stable room temperature, atmospheric pressure, and electrical noises. The motion coordinates are thus learned by our reservoir computing framework, PhysRes, to model the states based on prior observations. Through training, our model achieves excellent results on the test set, with a normalized root mean squared error of 0.0041 in estimating the control points and a corresponding volumetric error of 0.0160%. To further demonstrate how such actuators could be implemented in the future, we also design a dual-pouch actuator-based robotic gripper to control the grasping of soft objects. /span> span> a href#more_liquid_pouch data-togglecollapse onclicktoggleText(this) idlink-more_liquid_pouch>... See More/a>/span> /div> /div>/div> div classproject data-filterothers> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_model_vulnerability.gif /> /div> div classcol-sm-8> b>Distortion-Aware Adversarial Attacks on Bounding Boxes of Object Detectors/b> br /> i>a hrefhttps://visapp.scitevents.org/ target_blank relnoopener>VISAPP 2025 (Porto, Portugal)/a>/i> br /> Phuc Pham, Son Vuong, u>Khang Nguyen/u>, Tuan Dang. br /> a hrefhttps://mkhangg.com/assets/papers/pham2025distortion.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/anonymous20210106/attack_detector target_blank relnoopener>CODE/a> | a hrefhttps://youtu.be/y_sQqECMJIk target_blank relnoopener>DEMO/a> br /> u>b>i>Abstract/i>/b>:/u> Deep learning-based object detection has become ubiquitous in the last decade due to its high accuracy in many real-world applications. With this growing trend, these models are interested in being attacked by adversaries, with most of the results being on classifiers, which span classcollapse idmore_model_vulnerability> do not match the context of practical object detection. In this work, we propose a novel method to fool object detectors, expose the vulnerability of state-of-the-art detectors, and promote later works to build more robust detectors to adversarial examples. Our method aims to generate adversarial images by perturbing object confidence scores during training, which is crucial in predicting confidence for each class in the testing phase. Herein, we provide a more intuitive technique to embed additive noises based on detected objects masks and the training loss with distortion control over the original image by leveraging the gradient of iterative images. 
To verify the proposed method, we perform adversarial attacks against different object detectors, including the most recent state-of-the-art models like YOLOv8, Faster R-CNN, RetinaNet, and Swin Transformer. We also evaluate our technique on MS COCO 2017 and PASCAL VOC 2012 datasets and analyze the trade-off between success attack rate and image distortion. Our experiments show that the achievable success attack rate is up to 100% and up to 98% when performing white-box and black-box attacks, respectively. /span> span> a href#more_model_vulnerability data-togglecollapse onclicktoggleText(this) idlink-more_model_vulnerability>... See More/a>/span> /div> /div>/div> div classproject data-filterothers> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_bachelor_thesis.gif /> /div> div classcol-sm-8> b>Hand-Eye-Force Coordination for Robotic Manipulation/b> br /> i>a href target_blank relnoopener>Bachelor Thesis @ UTA/a>/i> br /> u>Khang Nguyen/u>. br /> a hrefhttps://mkhangg.com/assets/theses/nguyen2024hand.pdf target_blank relnoopener>PDF/a> br /> u>b>i>Abstract/i>/b>:/u> In vision-based robotic manipulation, when a robot identifies an object to grasp, the knowledge of the objects positional, geometrical, and physical properties is not perfect. Deformable objects, such as soda cans, plastic bottles, and paper cups, pose the best challenges in learning the uncertainty of these properties in terms of grasping. To grasp these, the robot must span classcollapse idmore_hand_eye_force> adaptively control and coordinate its hands, eyes, and fingertip forces to such objects under diverse unstructured representations. In other words, the robots hands, eyes, and the amount of applied forces must be well-coordinated. This thesis explores the fundamentals of human-inspired mechanisms and applies them to robot grasping to develop hand-eye-force coordination for deformable object manipulation. With an object-finding task, the robot encountered an unstructured environment cluttered with known objects. First, it must look at the environments overview and store the scenes semantic information for later object-finding iterations. With the information stored, the robot must find the desired object, grasp it, and bring it back. To achieve the perception goal, the robot is first enabled to perceive the environment as a whole, like when humans encounter a newly explored scene, and to learn to recognize objects efficiently in three-dimensional space by emulating the visual selective attention model. Lastly, in some special cases, the robot might encounter an already-deformed object due to manipulative results by humans or itself in later iterations. To refine this more efficiently, the robot is also trained to re-recognize these items through a synthetic deformable object dataset, which is auto-generated using an intuitive Laplacian-based mesh deformation procedure. Throughout this thesis, these sub-problems are addressed, and the feasibility of each is demonstrated with experiments on a real robot system. /span> span> a href#more_hand_eye_force data-togglecollapse onclicktoggleText(this) idlink-more_hand_eye_force>... 
See More/a>/span> /div> /div>/div> div classproject data-filterselected> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_refined_mapping.gif /> /div> div classcol-sm-8> b>Volumetric Mapping with Panoptic Refinement via Kernel Density Estimation for Mobile Robots/b> br /> i>a hrefhttps://iros2024-abudhabi.org/ target_blank relnoopener>IROS 2024 (Abu Dhabi, United Arab Emirates)/a>/i> br /> u>Khang Nguyen/u>, Tuan Dang, Manfred Huber. br /> a hrefhttps://mkhangg.com/assets/papers/nguyen2024volumetric.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/mkhangg/refined_panoptic_mapping target_blank relnoopener>CODE/a> | a hrefhttps://youtu.be/u214kCms27M target_blank relnoopener>DEMO/a> | a hrefhttps://mkhangg.com/assets/slides/iros24b_slides.pdf target_blank relnoopener>SLIDES/a> | a hrefhttps://youtu.be/vQZMQApcTCY target_blank relnoopener>TALK/a> | a hrefhttps://mkhangg.com/assets/posters/iros24b_poster.pdf target_blank relnoopener>POSTER/a> br /> u>b>i>Abstract/i>/b>:/u> Reconstructing three-dimensional (3D) scenes with semantic understanding is vital in many robotic applications. Robots need to identify which objects, along with their positions and shapes, to manipulate them precisely with given tasks. Mobile robots, especially, usually use lightweight networks to segment objects on RGB images and then localize them via span classcollapse idmore_refined_mapping> depth maps; however, they often encounter out-of-distribution scenarios where masks over-cover the objects. In this paper, we address the problem of panoptic segmentation quality in 3D scene reconstruction by refining segmentation errors using non-parametric statistical methods. To enhance mask precision, we map the predicted masks into a depth frame to estimate their distribution via kernel densities. The outliers in depth perception are then rejected without the need for additional parameters in an adaptive manner to out-of-distribution scenarios, followed by 3D reconstruction using projective signed distance functions (SDFs). We validate our method on a synthetic dataset, which shows improvements in both quantitative and qualitative results for panoptic mapping. Through real-world testing, the results furthermore show our methods capability to be deployed on a real-robot system. /span> span> a href#more_refined_mapping data-togglecollapse onclicktoggleText(this) idlink-more_refined_mapping>... See More/a>/span> /div> /div>/div> div classproject data-filterslam> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_refined_mapping.gif /> /div> div classcol-sm-8> b>Volumetric Mapping with Panoptic Refinement via Kernel Density Estimation for Mobile Robots/b> br /> i>a hrefhttps://iros2024-abudhabi.org/ target_blank relnoopener>IROS 2024 (Abu Dhabi, United Arab Emirates)/a>/i> br /> u>Khang Nguyen/u>, Tuan Dang, Manfred Huber. 
br /> a hrefhttps://mkhangg.com/assets/papers/nguyen2024volumetric.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/mkhangg/refined_panoptic_mapping target_blank relnoopener>CODE/a> | a hrefhttps://youtu.be/u214kCms27M target_blank relnoopener>DEMO/a> | a hrefhttps://mkhangg.com/assets/slides/iros24b_slides.pdf target_blank relnoopener>SLIDES/a> | a hrefhttps://youtu.be/vQZMQApcTCY target_blank relnoopener>TALK/a> | a hrefhttps://mkhangg.com/assets/posters/iros24b_poster.pdf target_blank relnoopener>POSTER/a> br /> u>b>i>Abstract/i>/b>:/u> Reconstructing three-dimensional (3D) scenes with semantic understanding is vital in many robotic applications. Robots need to identify which objects, along with their positions and shapes, to manipulate them precisely with given tasks. Mobile robots, especially, usually use lightweight networks to segment objects on RGB images and then localize them via span classcollapse idmore_refined_mapping> depth maps; however, they often encounter out-of-distribution scenarios where masks over-cover the objects. In this paper, we address the problem of panoptic segmentation quality in 3D scene reconstruction by refining segmentation errors using non-parametric statistical methods. To enhance mask precision, we map the predicted masks into a depth frame to estimate their distribution via kernel densities. The outliers in depth perception are then rejected without the need for additional parameters in an adaptive manner to out-of-distribution scenarios, followed by 3D reconstruction using projective signed distance functions (SDFs). We validate our method on a synthetic dataset, which shows improvements in both quantitative and qualitative results for panoptic mapping. Through real-world testing, the results furthermore show our methods capability to be deployed on a real-robot system. /span> span> a href#more_refined_mapping data-togglecollapse onclicktoggleText(this) idlink-more_refined_mapping>... See More/a>/span> /div> /div>/div> div classproject data-filterslam> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_v3d_slam.gif /> /div> div classcol-sm-8> b>i>V3D-SLAM/i>:/b> b>Robust RGB-D SLAM in Dynamic Environments with 3D Semantic Geometry Voting/b> br /> i>a hrefhttps://iros2024-abudhabi.org/ target_blank relnoopener>IROS 2024 (Abu Dhabi, United Arab Emirates)/a>/i> br /> Tuan Dang, u>Khang Nguyen/u>, Manfred Huber. br /> a hrefhttps://www.tuandang.info/assets/papers/v3d-slam.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/tuantdang/v3d-slam target_blank relnoopener>CODE/a> | a hrefhttps://youtu.be/K4RcKrASpqI target_blank relnoopener>DEMO/a> | a hrefhttps://mkhangg.com/assets/slides/iros24a_slides.pdf target_blank relnoopener>SLIDES/a> | a hrefhttps://youtu.be/aWGu9Qxow7g target_blank relnoopener>TALK/a> | a hrefhttps://mkhangg.com/assets/posters/iros24a_poster.pdf target_blank relnoopener>POSTER/a> br /> u>b>i>Abstract/i>/b>:/u> Simultaneous localization and mapping (SLAM) in highly dynamic environments is challenging due to the correlation complexity between moving objects and the camera pose. Many methods have been proposed to deal with this problem; however, the moving properties of dynamic objects with a moving camera remain unclear. Therefore, to improve SLAMs span classcollapse idmore_v3d_slam> performance, minimizing disruptive events of moving objects with a physical understanding of 3D shapes and dynamics of objects is needed. 
In this paper, we propose a robust method, V3D-SLAM, to remove moving objects via two lightweight re-evaluation stages, including identifying potentially moving and static objects using a spatial-reasoned Hough voting mechanism and refining static objects by detecting dynamic noise caused by intra-object motions using Chamfer distances as similarity measurements. Through our experiment on the TUM RGB-D benchmark on dynamic sequences with ground-truth camera trajectories, the results show that our methods outperform most other recent state-of-the-art SLAM methods. /span> span> a href#more_v3d_slam data-togglecollapse onclicktoggleText(this) idlink-more_v3d_slam>... See More/a>/span> /div> /div>/div> div classproject data-filterperception manipulation> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_scene_perception.gif /> /div> div classcol-sm-8> b>Real-Time 3D Semantic Scene Perception for Egocentric Robots with Binocular Vision/b> br /> i>a href target_blank relnoopener>arXiv (19/02/2024)/a>/i> br /> u>Khang Nguyen/u>, Tuan Dang, Manfred Huber. br /> a hrefhttps://arxiv.org/pdf/2402.11872.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/mkhangg/semantic_scene_perception target_blank relnoopener>CODE/a> | a hrefhttps://youtu.be/-dho7l_r56U target_blank relnoopener>DEMO/a> br /> u>b>i>Abstract/i>/b>:/u> Perceiving a three-dimensional (3D) scene with multiple objects while moving indoors is essential for vision-based mobile cobots, especially for enhancing their manipulation tasks. In this work, we present an end-to-end pipeline with instance segmentation, feature matching, and point-set registration for egocentric robots with binocular vision, and demonstrate the robots span classcollapse idmore_scene_perception> grasping capability through the proposed pipeline. First, we design an RGB image-based segmentation approach for single-view 3D semantic scene segmentation, leveraging common object classes in 2D datasets to encapsulate 3D points into point clouds of object instances through corresponding depth maps. Next, 3D correspondences of two consecutive segmented point clouds are extracted based on matched keypoints between objects of interest in RGB images from the prior step. In addition, to be aware of spatial changes in 3D feature distribution, we also weigh each 3D point pair based on the estimated distribution using kernel density estimation (KDE), which subsequently gives robustness with less central correspondences while solving for rigid transformations between point clouds. Finally, we test our proposed pipeline on the 7-DOF dual-arm Baxter robot with a mounted Intel RealSense D435i RGB-D camera. The result shows that our robot can segment objects of interest, register multiple views while moving, and grasp the target object. /span> span> a href#more_scene_perception data-togglecollapse onclicktoggleText(this) idlink-more_scene_perception>... See More/a>/span> /div> /div>/div> div classproject data-filterperception manipulation> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_online.gif /> /div> div classcol-sm-8> b>Online 3D Deformable Object Classification for Mobile Cobot Manipulation/b> br /> i>a hrefhttps://www.isr-robotics.org/isr target_blank relnoopener>ISR Europe 2023 (Stuttgart, Baden-Wurttemberg, Germany)/a>/i> br /> u>Khang Nguyen/u>, Tuan Dang, Manfred Huber. 
br /> a hrefhttps://mkhangg.com/assets/papers/nguyen2023online.pdf target_blank relnoopener>PDF/a> | a hrefhttps://github.com/mkhangg/deformable_cobot target_blank relnoopener>CODE/a> | a hrefhttps://youtu.be/qkgi3T6xYzI target_blank relnoopener>DEMO/a> | a hrefhttps://mkhangg.com/assets/slides/isr23_slides.pdf target_blank relnoopener>SLIDES/a> | a hrefhttps://youtu.be/ATzyXtLAK6E target_blank relnoopener>TALK/a> br /> u>b>i>Abstract/i>/b>:/u> Vision-based object manipulation in assistive mobile cobots essentially relies on classifying the target objects based on their 3D shapes and features, whether they are deformed or not. In this work, we present an auto-generated dataset of deformed objects specific for assistive mobile cobot manipulation using an intuitive Laplacian-based mesh deformation span classcollapse idmore_online> procedure. We first determine the graspable region of the robot hand on the given objects mesh. Then, we uniformly sample handle points within the graspable region and perform deformation with multiple handle points based on the robot gripper configuration. In each deformation, we identify the orientation of handle points and prevent self-intersection to guarantee the objects physical meaning when multiple handle points are simultaneously applied to the mesh at different deformation intensities. We also introduce a lightweight neural network for 3D deformable object classification. Finally, we test our generated dataset on the Baxter robot with two 7-DOF arms, an integrated RGB-D camera, and a 3D deformable object classifier. The result shows that the robot is able to classify real-world deformed objects from point clouds captured at multiple views by the RGB-D camera. /span> span> a href#more_online data-togglecollapse onclicktoggleText(this) idlink-more_online>... See More/a>/span> /div> /div>/div> div classproject data-filterperception manipulation> div classrow mb-4> div classcol-sm-4> img width100% heightauto classw-full rounded-lg srcassets/img/demo_multiplanar.gif /> /div> div classcol-sm-8> b>Multiplanar Self-Calibration for Mobile Cobot 3D Object Manipulation using 2D Detectors and Depth Estimation/b> br /> i>a hrefhttps://ieee-iros.org/ target_blank relnoopener>IROS 2023 (Detroit, MI, U.S.)/a>/i> br /> Tuan Dang, u>Khang Nguyen/u>, Manfred Huber. br /> a hrefhttps://ieeexplore.ieee.org/stamp/stamp.jsp?tp&arnumber10341911 target_blank relnoopener>PDF/a> | a hrefhttps://github.com/tuantdang/calib_cobot target_blank relnoopener>CODE/a> | a hrefhttps://youtu.be/KrDJ22rvOAo target_blank relnoopener>DEMO/a> br /> u>b>i>Abstract/i>/b>:/u> Calibration is the first and foremost step in dealing with sensor displacement errors that can appear during extended operation and off-time periods to enable robot object manipulation with precision. In this paper, we present a novel multiplanar self-calibration between the span classcollapse idmore_multiplanar> camera system and the robots end-effector for 3D object manipulation. Our approach first takes the robot end-effector as ground truth to calibrate the camera’s position and orientation while the robot arm moves the object in multiple planes in 3D space, and a 2D state-of-the-art vision detector identifies the object’s center in the image coordinates system. The transformation between world coordinates and image coordinates is then computed using 2D pixels from the detector and 3D known points obtained by robot kinematics. 
Multiplanar Self-Calibration for Mobile Cobot 3D Object Manipulation using 2D Detectors and Depth Estimation
Venue: IROS 2023, Detroit, MI, U.S. (https://ieee-iros.org/)
Authors: Tuan Dang, Khang Nguyen, Manfred Huber
Links: PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10341911 | CODE: https://github.com/tuantdang/calib_cobot | DEMO: https://youtu.be/KrDJ22rvOAo
Abstract: Calibration is the first and foremost step in dealing with the sensor displacement errors that can appear during extended operation and off-time periods, and it is required for precise robot object manipulation. In this paper, we present a novel multiplanar self-calibration between the camera system and the robot's end-effector for 3D object manipulation. Our approach first takes the robot end-effector as ground truth to calibrate the camera's position and orientation while the robot arm moves an object across multiple planes in 3D space, and a state-of-the-art 2D vision detector identifies the object's center in image coordinates. The transformation between world coordinates and image coordinates is then computed using the 2D pixels from the detector and the 3D points known from robot kinematics. Next, an integrated stereo-vision system estimates the distance between the camera and the object, resulting in 3D object localization. We test the proposed method on the Baxter robot with two 7-DOF arms and a 2D detector that runs in real time on an onboard GPU. After self-calibration, our robot can localize objects in 3D using an RGB camera and depth image.
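The world-to-image transformation step described above is, at its core, a pose-from-correspondences problem: given 2D detections of the object's center and the matching 3D positions reported by robot kinematics, estimate the camera's extrinsics. Below is a minimal sketch using OpenCV's solvePnP; the intrinsics and correspondences are invented placeholders, and this is not the authors' calibration pipeline.

```python
# Minimal sketch: recover camera extrinsics from 2D detections and known 3D points.
# All numeric values (intrinsics, correspondences) are placeholders for illustration.
import cv2
import numpy as np

# 3D positions of the object's center in the robot/world frame (from kinematics), meters.
object_points = np.array([
    [0.50, 0.10, 0.20], [0.55, -0.05, 0.20], [0.60, 0.12, 0.35],
    [0.45, -0.10, 0.35], [0.52, 0.00, 0.50], [0.58, 0.08, 0.50],
], dtype=np.float64)

# Matching 2D detections of the object's center in the image (pixels).
image_points = np.array([
    [410.0, 300.0], [360.0, 310.0], [430.0, 220.0],
    [330.0, 230.0], [385.0, 150.0], [420.0, 145.0],
], dtype=np.float64)

# Assumed pinhole intrinsics and no lens distortion.
K = np.array([[615.0, 0.0, 320.0],
              [0.0, 615.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)           # rotation, world frame to camera frame
print("ok:", ok)
print("R:\n", R)
print("t:", tvec.ravel())            # translation, world frame to camera frame
```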
ExtPerFC: An Efficient 2D & 3D Perception Software-Hardware Framework for Mobile Cobot
Venue: arXiv (08/06/2023)
Authors: Tuan Dang, Khang Nguyen, Manfred Huber
Links: PDF: https://arxiv.org/pdf/2306.04853.pdf | CODE: https://github.com/tuantdang/perception_framework | DEMO: https://youtu.be/q4oz9Rixbzs
Abstract: Since the reliability of a robot's perception correlates with the number of integrated sensing modalities used to tackle uncertainty, a practical solution is needed to manage these sensors across different computers, operate them simultaneously, and maintain their real-time performance on an existing robotic system with minimal effort. In this work, we present an end-to-end software-hardware framework, ExtPerFC, that supports both conventional hardware and software components and integrates machine-learning object detectors without requiring an additional dedicated graphics processing unit (GPU). We first design the framework to achieve real-time performance on the existing robotic system, guarantee configuration optimization, and concentrate on code reusability. We then mathematically model our transfer-learning strategies for 2D object detection and fuse the detections with depth images for 3D depth estimation. Lastly, we systematically test the proposed framework on the Baxter robot with two 7-DOF arms, a four-wheel mobility base, and an Intel RealSense D435i RGB-D camera. The results show that the robot achieves real-time performance while simultaneously executing other tasks (e.g., map building, localization, navigation, object detection, arm motion, and grasping) with available hardware such as Intel onboard CPUs/GPUs on distributed computers. To comprehensively control, program, and monitor the robot system, we also design and introduce an end-user application.

PerFC: An Efficient 2D and 3D Perception Software-Hardware Framework for Mobile Cobot
Venue: FLAIRS-36, Clearwater Beach, FL, U.S. (https://www.flairs-36.info/home)
Authors: Tuan Dang, Khang Nguyen, Manfred Huber
Links: PDF: https://journals.flvc.org/FLAIRS/article/view/133316/137627 | CODE: https://github.com/tuantdang/perception_framework | DEMO: https://youtu.be/q4oz9Rixbzs
Abstract: In this work, we present an end-to-end software-hardware framework that supports both conventional hardware and software components and integrates machine-learning object detectors without requiring an additional dedicated graphics processing unit (GPU). We design the framework to achieve real-time performance on the robot system, guarantee such performance across multiple computing devices, and concentrate on code reusability. We then utilize transfer-learning strategies for 2D object detection and fuse the detections with depth images for 3D depth estimation. Lastly, we test the proposed framework on the Baxter robot with two 7-DOF arms and a four-wheel mobility base. The results show that the robot achieves real-time performance while executing other tasks (map building, localization, navigation, object detection, arm motion, and grasping) with available hardware such as Intel onboard GPUs on distributed computers. To comprehensively control, program, and monitor the robot system, we also design and introduce an end-user application.
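Both framework entries above describe fusing 2D object detections with depth images to localize objects in 3D. The usual mechanics of that fusion is a pinhole back-projection of the detection's center pixel using the depth value and the camera intrinsics; a minimal sketch follows, with made-up intrinsics and a fake depth map, and it is not tied to the authors' released framework.

```python
# Minimal sketch: lift a 2D detection to a 3D point using a depth image and
# pinhole intrinsics. The intrinsics and detection box are placeholders.
import numpy as np

def detection_to_3d(depth_m, bbox, fx, fy, cx, cy):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; depth_m is a float32 depth map in meters."""
    u = int((bbox[0] + bbox[2]) / 2)        # detection center, pixel column
    v = int((bbox[1] + bbox[3]) / 2)        # detection center, pixel row
    z = float(depth_m[v, u])                # depth at the center pixel
    if z <= 0.0:                            # invalid or missing depth reading
        return None
    x = (u - cx) * z / fx                   # back-project with the pinhole model
    y = (v - cy) * z / fy
    return np.array([x, y, z])              # 3D point in the camera frame

if __name__ == "__main__":
    depth = np.full((480, 640), 0.85, dtype=np.float32)   # fake plane 0.85 m away
    print(detection_to_3d(depth, (300, 200, 340, 260),
                          fx=615.0, fy=615.0, cx=320.0, cy=240.0))
```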
IoTree: A Battery-free Wearable System with Biocompatible Sensors for Continuous Tree Health Monitoring
Venue: MobiCom 2022, Sydney, NSW, Australia (https://www.sigmobile.org/mobicom/2022/)
Authors: Tuan Dang, Trung Tran, Khang Nguyen, Tien Pham, Nhat Pham, Tam Vu, Phuc Nguyen
Links: PDF: https://dl.acm.org/doi/pdf/10.1145/3495243.3567652 | CODE: https://github.com/tuantdang/iotree | DEMO: https://youtu.be/8DUfOcuPwIk
Abstract: In this paper, we present a low-maintenance, wind-powered, battery-free, biocompatible, tree-wearable, and intelligent sensing system, IoTree, to monitor water and nutrient levels inside a living tree. The IoTree system includes tiny, biocompatible, implantable sensors that continuously measure impedance variations inside the living tree's xylem, where water and nutrients are transported from the root to the upper parts. The collected data are compressed and transmitted to a base station located up to 1.8 kilometers (approximately 1.1 miles) away. The entire IoTree system is powered by wind energy and controlled by an adaptive computing technique called block-based intermittent computing, which ensures forward progress and data consistency under intermittent power and allows the firmware to execute with optimal memory and energy usage. We prototype IoTree to opportunistically perform sensing, data compression, and long-range communication tasks without batteries. In in-lab experiments, IoTree achieves accuracies of 91.08% and 90.51% in measuring 10 levels of the nutrients NH3 and K2O, respectively. When tested with Burkwood Viburnum and White Bird trees in an indoor environment, IoTree data strongly correlated with multiple watering and fertilizing events. We also deployed IoTree on a grapevine farm for 30 days, where the system provided sufficient measurements every day.
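The block-based intermittent computing mentioned in the IoTree entry above splits work into small blocks whose results are committed to non-volatile memory, so a power loss never corrupts state and execution resumes at the last completed block. The sketch below is only a conceptual illustration of that checkpoint-and-resume pattern (a JSON file stands in for non-volatile memory); it is unrelated to the actual IoTree firmware.

```python
# Conceptual sketch of block-based intermittent computing: each block's result is
# committed to "non-volatile" storage (a JSON file here) before moving on, so a
# power failure resumes at the last committed block. Not the IoTree firmware.
import json
import os

CHECKPOINT = "checkpoint.json"   # stand-in for non-volatile memory

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_block": 0, "state": {}}

def commit(cp):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:          # write-then-rename keeps the commit atomic
        json.dump(cp, f)
    os.replace(tmp, CHECKPOINT)

def run_blocks(blocks):
    cp = load_checkpoint()
    for i in range(cp["next_block"], len(blocks)):
        cp["state"] = blocks[i](cp["state"])   # a power loss here loses only block i
        cp["next_block"] = i + 1
        commit(cp)                             # block i is now durable
    return cp["state"]

if __name__ == "__main__":
    blocks = [
        lambda s: {**s, "raw": [3.1, 3.3, 3.2]},                  # "sense"
        lambda s: {**s, "mean": sum(s["raw"]) / len(s["raw"])},   # "compress"
        lambda s: {**s, "sent": True},                            # "transmit"
    ]
    print(run_blocks(blocks))
```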
🧩 outreach activities

Lightweight Semantic Perception Module for Autonomous Systems
Venue: Senior Design (Arlington, TX, U.S.)
Team: Zobia Tahir, Diya Ranjit, Jose Morales, ChangHao Yang, Khang Nguyen
Links: POST
Description: We build a versatile software module for semantic scene understanding in broadly vision-based automation applications, including robots and unmanned vehicles equipped with the Intel RealSense camera family on resource-constrained computing platforms. The video demonstrates the Sawyer manipulator seeing a bottle, picking it up, and placing it into a destination box.

Autonomous Waypoint Navigation with GPS and ArduRover
Venue: UVS Design (Arlington, TX, U.S.)
Team: Kevin Mathew, Jesus Garza Munoz, Benjamin Nguyen, Khang Nguyen
Links: POST | DEMO: https://youtu.be/5QHfGqPYY8Y
Description: We design a rover platform with hardware-software integration for autonomous outdoor waypoint navigation using a Here3 GPS and ArduRover. The video demonstrates the rover's mission through 10 defined waypoints on the UTA campus, planned in Mission Planner and commanded over wireless communication to the Hex Cube (Pixhawk 2.1) controller.

Spidey: An Autonomous Spatial Voice Localization Crawling Robot
Venue: HackMIT 2022, Boston, MA, U.S. (https://hackmit.org/)
Team: Khang Nguyen
Links: POST: https://spectacle.hackmit.org/project/185 | CODE: https://github.com/mkhangg/hackmit22 | DEMO: https://youtu.be/m1g6fkH6Zvg
Description: We present an autonomous crawling robot that localizes voices in space, demonstrating the potential of assistive technology that lets people with visual impairment call for help anywhere in a space without physical assistance.
Prize: Won the Sponsorship Award for Assistive Technologies among 198 competing teams.

iPlanter: An Autonomous Ground Monitoring and Tree Planting Robot
Venue: GT IEEE RoboTech 2022, Atlanta, GA, U.S. (https://robotech2022.devpost.com/)
Team: Khang Nguyen, Muhtasim Mahfuz, Vincent Kipchoge, Johnwon Hyeon
Links: POST: https://devpost.com/software/tree-planting-robot | CODE: https://github.com/mkhangg/robotech22 | DEMO: https://youtu.be/GZ0oAX-lLSM
Description: Our tree-planting robot demonstrates an on-farm surveying robot that autonomously determines soil quality, plants seeds, and collects on-ground images.
Prize: Won 2nd place in the Body Track, 3rd place in the Electrical Track, and a Top 8 prize among 47 competing teams (approximately 160 participants).

⛏️ resources

All software releases of the above projects can also be found here, as filterable GitHub repository cards (all / research / outreach / miscellaneous) loaded from the GitHub API:
- research: https://api.github.com/repos/mkhangg/refined_panoptic_mapping
- research: https://api.github.com/repos/tuantdang/v3d-slam
- research: https://api.github.com/repos/mkhangg/semantic_scene_perception
- miscellaneous: https://api.github.com/repos/mkhangg/academic-website
- research: https://api.github.com/repos/mkhangg/deformable_cobot
- research: https://api.github.com/repos/mkhangg/calib_cobot
- research: https://api.github.com/repos/mkhangg/perception_framework
- outreach: https://api.github.com/repos/mkhangg/hackmit22
- research: https://api.github.com/repos/tuantdang/iotree
- outreach: https://api.github.com/repos/mkhangg/robotech22
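The resources section above is populated client-side from the GitHub REST API (each card's data-url points at an api.github.com/repos endpoint). A rough Python equivalent is sketched below; the selected fields (full_name, description, stargazers_count, forks_count, language) come from the public API response, and unauthenticated requests are rate-limited.

```python
# Minimal sketch: fetch the repository metadata that backs the GitHub cards above.
# Unauthenticated GitHub API requests are rate-limited (about 60 per hour per IP).
import requests

CARD_URLS = [
    "https://api.github.com/repos/mkhangg/semantic_scene_perception",
    "https://api.github.com/repos/tuantdang/v3d-slam",
]

def fetch_card(url):
    resp = requests.get(url, headers={"Accept": "application/vnd.github+json"}, timeout=10)
    resp.raise_for_status()
    repo = resp.json()
    return {
        "name": repo["full_name"],
        "description": repo["description"],
        "stars": repo["stargazers_count"],
        "forks": repo["forks_count"],
        "language": repo["language"],
    }

if __name__ == "__main__":
    for url in CARD_URLS:
        print(fetch_card(url))
```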
🖼️ gallery

Not all about robots, but also some happy moments of my life:
- visited sheikh zayed grand mosque (2024)
- spent summer in vietnam (2024)
- me and tuan also met new friends at iros (2023)
- first time playing football on snow (2022)
- school of fish at georgia aquarium (2021)
- these crazy boys from ptnk i1720 (2020)
- wolfram summer research program fieldtrip (2019)
- my sister started her mba (2018)
- at nikolskaya street near red square (2018)
- my boys from tđn maths team (2017)

Footer: © khang nguyen | web source @github: https://github.com/mkhangg/academic-website | MBZUAI logo. The page closes by loading jQuery, Popper, Bootstrap, Isotope, OwlCarousel2, Axios, anime.js, and the site's own js/scripts.js.