NeuroMechFly v2: simulating embodied sensorimotor control in adult Drosophila

Data availability

Data are available via The Harvard Dataverse Repository at https://doi.org/10.7910/DVN/3MCEYR (ref. 78). This repository includes (i) the experimentally recorded walking kinematics, (ii) trained parameters of the path integration models, (iii) trained parameters of the head stabilization models, (iv) trained parameters of the visual processing and reinforcement learning models in the multimodal navigation task, (v) training data for the visual processing model, and the graph representation of the ommatidia lattice used to perform graph convolution, (vi) the simulated complex plume dataset and (vii) baseline neuron activities in the connectome-constrained visual system model. Source data are provided with this paper.

Code availability

The FlyGym package is available at https://github.com/NeLy-EPFL/flygym/ under the Apache-2.0 license. The documentation for FlyGym, along with detailed tutorials for some experiments in this paper, is available at https://neuromechfly.org/.

The code used to generate some figures is not part of the FlyGym package; it is available separately at https://github.com/NeLy-EPFL/nmf2-paper under the same license.

A frozen snapshot of our code is available via Zenodo at https://doi.org/10.5281/zenodo.12973000 (ref. 79). However, FlyGym is under continued development and we recommend always using the latest version. Additionally, even with an exact copy of the code and its dependencies, results may not be bit-for-bit identical to those shown in this paper because of differences in computing hardware.
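
Assuming a standard Python environment, the released package can typically be installed from PyPI and the figure code obtained by cloning the companion repository; these commands are a sketch, and the documentation at https://neuromechfly.org/ should be consulted for supported versions:

```shell
# Install the latest FlyGym release (recommended above); the exact
# versions used for the paper are archived in the Zenodo snapshot
pip install flygym

# Figure-generation code lives in the companion repository
git clone https://github.com/NeLy-EPFL/nmf2-paper.git
```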

References

  1. Cruse, H., Kindermann, T., Schumm, M., Dean, J. & Schmitz, J. Walknet—a biologically inspired network to control six-legged walking. Neural Netw. 11, 1435–1447 (1998).

  2. Schumacher, P. et al. Natural and robust walking using reinforcement learning without demonstrations in high-dimensional musculoskeletal models. Preprint at https://arxiv.org/abs/2309.02976 (2023).

  3. Thandiackal, R. et al. Emergence of robust self-organized undulatory swimming based on local hydrodynamic force sensing. Sci. Robot. 6, eabf6354 (2021).

  4. Ijspeert, A. J., Crespi, A., Ryczko, D. & Cabelguen, J. From swimming to walking with a salamander robot driven by a spinal cord model. Science 315, 1416–1420 (2007).

  5. Towers, M. et al. Gymnasium. Zenodo https://doi.org/10.5281/zenodo.11232524 (2024).

  6. Levine, S., Finn, C., Darrell, T. & Abbeel, P. End-to-end training of deep visuomotor policies. J. Mach. Learning Res. 17, 1–40 (2016).

  7. Shi, H., Lin, Z., Hwang, K., Yang, S. & Chen, J. An adaptive strategy selection method with reinforcement learning for robotic soccer games. IEEE Access 6, 8376–8386 (2018).

  8. Ho, J. & Ermon, S. Generative adversarial imitation learning. In Proc. 30th International Conference on Neural Information Processing Systems. 4572–4580 (Curran Associates, 2016).

  9. Nagabandi, A., Kahn, G., Fearing, R. S. & Levine, S. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation 7559–7566 (IEEE Press, 2018).

  10. Merel, J. et al. Deep neuroethology of a virtual rodent. In Proc. International Conference on Learning Representations (2020); https://openreview.net/forum?id=SyxrxR4KPS

  11. Choi, S. et al. Learning quadrupedal locomotion on deformable terrain. Sci. Robot. 8, eade2256 (2023).

  12. Caggiano, V., Wang, H., Durandau, G., Sartori, M. & Kumar, V. MyoSuite: a contact-rich simulation suite for musculoskeletal motor control. Proc. Mach. Learn. Res. 168, 492–507 (2022).

  13. Lobato-Rios, V. et al. NeuroMechFly, a neuromechanical model of adult Drosophila melanogaster. Nat. Methods 19, 620–627 (2022).

  14. Vaxenburg, R. et al. Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning. Preprint at bioRxiv https://doi.org/10.1101/2024.03.11.584515 (2024).

  15. Aldarondo, D. et al. A virtual rodent predicts the structure of neural activity across behaviors. Nature 632, 594–602 (2024).

  16. Merel, J., Botvinick, M. & Wayne, G. Hierarchical motor control in mammals and machines. Nat. Commun. 10, 5489 (2019).

  17. Raji, J. I. & Potter, C. J. The number of neurons in Drosophila and mosquito brains. PLoS ONE 16, e0250381 (2021).

  18. Azevedo, A. et al. Connectomic reconstruction of a female Drosophila ventral nerve cord. Nature 631, 360–368 (2024).

  19. Pick, S. & Strauss, R. Goal-driven behavioral adaptations in gap-climbing Drosophila. Curr. Biol. 15, 1473–1478 (2005).

  20. Muijres, F. T., Elzinga, M. J., Melis, J. M. & Dickinson, M. H. Flies evade looming targets by executing rapid visually directed banked turns. Science 344, 172–177 (2014).

  21. Pavlou, H. J. & Goodwin, S. F. Courtship behavior in Drosophila melanogaster: towards a ‘courtship connectome’. Curr. Opin. Neurobiol. 23, 76–83 (2013).

  22. Hoopfer, E. D. Neural control of aggression in Drosophila. Curr. Opin. Neurobiol. 38, 109–118 (2016).

  23. Wolf, R. et al. Drosophila mushroom bodies are dispensable for visual, tactile, and motor learning. Learn. Mem. 5, 166–178 (1998).

  24. Dorkenwald, S. et al. Neuronal wiring diagram of an adult brain. Nature 634, 124–138 (2024).

  25. Phelps, J. S. et al. Reconstruction of motor control circuits in adult Drosophila using automated transmission electron microscopy. Cell 184, 759–774 (2021).

  26. Takemura, S. et al. A connectome of the male Drosophila ventral nerve cord. eLife 13, RP97769 (2024).

  27. Jenett, A. et al. A GAL4-driver line resource for Drosophila neurobiology. Cell Rep. 2, 991–1001 (2012).

  28. Klapoetke, N. C. et al. Independent optical excitation of distinct neural populations. Nat. Methods 11, 338–346 (2014).

  29. Mohammad, F. et al. Optogenetic inhibition of behavior with anion channelrhodopsins. Nat. Methods 14, 271–274 (2017).

  30. Chen, T. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013).

  31. Lappalainen, J. K. et al. Connectome-constrained networks predict neural activity across the fly visual system. Nature https://doi.org/10.1038/s41586-024-07939-3 (2024).

  32. Shiu, P. K. et al. A leaky integrate-and-fire computational model based on the connectome of the entire adult Drosophila brain reveals insights into sensorimotor processing. Preprint at bioRxiv https://doi.org/10.1101/2023.05.02.539144 (2023).

  33. Todorov, E., Erez, T. & Tassa, Y. MuJoCo: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems https://doi.org/10.1109/IROS.2012.6386109 (IEEE, 2012).

  34. Erez, T., Tassa, Y. & Todorov, E. Simulation tools for model-based robotics: comparison of Bullet, Havok, MuJoCo, ODE and PhysX. In 2015 IEEE International Conference on Robotics and Automation https://doi.org/10.1109/ICRA.2015.7139807 (IEEE, 2015).

  35. Drechsler, P. & Federle, W. Biomechanics of smooth adhesive pads in insects: influence of tarsal secretion on attachment performance. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 192, 1213–1222 (2006).

  36. Bullock, J. M. R., Drechsler, P. & Federle, W. Comparison of smooth and hairy attachment pads in insects: friction, adhesion and mechanisms for direction-dependence. J. Exp. Biol. 211, 3333–3343 (2008).

  37. Gorb, S. N. et al. Structural design and biomechanics of friction-based releasable attachment devices in insects. Integr. Comp. Biol. 42, 1127–1139 (2002).

  38. Ramdya, P. et al. Climbing favours the tripod gait over alternative faster insect gaits. Nat. Commun. 8, 14494 (2017).

  39. Szczecinski, N. S., Bockemühl, T., Chockley, A. S. & Büschges, A. Static stability predicts the continuum of interleg coordination patterns in Drosophila. J. Exp. Biol. 221, jeb189142 (2018).

  40. Mantziaris, C., Bockemühl, T. & Büschges, A. Central pattern generating networks in insect locomotion. Dev. Neurobiol. 80, 16–30 (2020).

  41. Bellegarda, G. & Ijspeert, A. CPG-RL: learning central pattern generators for quadruped locomotion. IEEE Robot. Autom. Lett. 7, 12547–12554 (2022).

  42. Schneider, A., Paskarbeit, J., Schaeffersmann, M. & Schmitz, J. HECTOR, a new hexapod robot platform with increased mobility—control approach, design and communication. In Advances in Autonomous Mini Robots (eds. Rückert, U. et al.) 249–264 (Springer, 2012).

  43. Cagan, R. Principles of Drosophila eye differentiation. Curr. Top. Dev. Biol. 85, 115–135 (2009).

  44. Wilson, R. I. Early olfactory processing in Drosophila: mechanisms and principles. Annu. Rev. Neurosci. 36, 217–241 (2013).

  45. Cognigni, P., Felsenberg, J. & Waddell, S. Do the right thing: neural network mechanisms of memory formation, expression and update in Drosophila. Curr. Opin. Neurobiol. 49, 51–58 (2018).

  46. Taisz, I. et al. Generating parallel representations of position and identity in the olfactory system. Cell 186, 2556–2573 (2023).

  47. Chen, C. et al. Ascending neurons convey behavioral state to integrative sensory and action selection brain regions. Nat. Neurosci. 26, 682–695 (2023).

  48. Kim, I. S. & Dickinson, M. H. Idiothetic path integration in the fruit fly Drosophila melanogaster. Curr. Biol. 27, 2227–2238 (2017).

  49. Gollin, A. & Dürr, V. Estimating body pitch from distributed proprioception in a hexapod. In Biomimetic and Biohybrid Systems (eds Vouloutsi, V. et al.) 187–199 (Springer, 2018).

  50. Kress, D. & Egelhaaf, M. Head and body stabilization in blowflies walking on differently structured substrates. J. Exp. Biol. 215, 1523–1532 (2012).

  51. Demir, M., Kadakia, N., Anderson, H. D., Clark, D. A. & Emonet, T. Walking Drosophila navigate complex plumes using stochastic decisions biased by the timing of odor encounters. eLife 9, e57524 (2020).

  52. Cowley, B. R. et al. Mapping model units to visual neurons reveals population code for social behaviour. Nature 629, 1100–1108 (2024).

  53. Ramdya, P. & Ijspeert, A. J. The neuromechanics of animal locomotion: from biology to robotics and back. Sci. Robot. 8, eadg0279 (2023).

  54. Tunyasuvunakool, S. et al. dm_control: software and tasks for continuous control. Software Impacts 6, 100022 (2020).

  55. Schneider, D. Insect antennae. Annu. Rev. Entomol. 9, 103–122 (1964).

  56. Suver, M. P., Medina, A. M. & Nagel, K. I. Active antennal movements in Drosophila can tune wind encoding. Curr. Biol. 33, 780–789 (2023).

  57. Todi, S. V., Sharma, Y. & Eberl, D. F. Anatomical and molecular design of the Drosophila antenna as a flagellar auditory organ. Microsc. Res. Tech. 63, 388–399 (2004).

  58. Günel, S. et al. DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila. eLife 8, e48571 (2019).

  59. Arreguit, J., Ramalingasetty, S. T. & Ijspeert, A. J. FARMS: framework for animal and robot modeling and simulation. Preprint at bioRxiv https://doi.org/10.1101/2023.09.25.559130 (2023).

  60. Ozdil, P. G., Ijspeert, A. & Ramdya, P. Sequential-inverse-kinematics. Zenodo https://doi.org/10.5281/zenodo.12601316 (2024).

  61. Cruse, H. What mechanisms coordinate leg movement in walking arthropods? Trends Neurosci. 13, 15–21 (1990).

  62. Schilling, M., Hoinville, T., Schmitz, J. & Cruse, H. Walknet, a bio-inspired controller for hexapod walking. Biol. Cybern. 107, 397–419 (2013).

  63. Yang, H. H. et al. Fine-grained descending control of steering in walking Drosophila. Cell 187, 1–19 (2023).

  64. Strauss, R. & Heisenberg, M. Coordination of legs during straight walking and turning in Drosophila melanogaster. J. Comp. Physiol. A 167, 403–412 (1990).

  65. Sharkey, C. R., Blanco, J., Leibowitz, M. M., Pinto-Benito, D. & Wardill, T. J. The spectral sensitivity of Drosophila photoreceptors. Sci. Rep. 10, 18242 (2020).

  66. Rister, J., Desplan, C. & Vasiliauskas, D. Establishing and maintaining gene expression patterns: insights from sensory receptor patterning. Development 140, 493–503 (2013).

  67. Sancer, G. et al. Modality-specific circuits for skylight orientation in the fly visual system. Curr. Biol. 29, 2812–2825 (2019).

  68. Hindmarsh Sten, T., Li, R., Otopalik, A. & Ruta, V. Sexual arousal gates visual processing during Drosophila courtship. Nature 595, 549–553 (2021).

  69. Strother, J. A. et al. Behavioral state modulates the ON visual motion pathway of Drosophila. Proc. Natl Acad. Sci. USA 115, E102–E111 (2018).

  70. de Bruyne, M., Clyne, P. J. & Carlson, J. R. Odor coding in a model olfactory organ: the Drosophila maxillary palp. J. Neurosci. 19, 4520–4532 (1999).

  71. Haarnoja, T., Zhou, A., Abbeel, P. & Levine, S. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proc. 35th International Conference on Machine Learning (PMLR, 2018).

  72. Holl, P., Thuerey, N. & Koltun, V. Learning to control PDEs with differentiable physics. In International Conference on Learning Representations (2020); https://openreview.net/forum?id=HyeSin4FPB

  73. Koehler, F. Machine learning and simulation. Zenodo https://doi.org/10.5281/zenodo.12793324 (2024).

  74. Raghu, S. V. & Borst, A. Candidate glutamatergic neurons in the visual system of Drosophila. PLoS ONE 6, e19472 (2011).

  75. Kolodziejczyk, A., Sun, X., Meinertzhagen, I. A. & Nässel, D. R. Glutamate, GABA and acetylcholine signaling components in the lamina of the Drosophila visual system. PLoS ONE 3, e2110 (2008).

  76. Keleş, M. F., Hardcastle, B. J., Städele, C., Xiao, Q. & Frye, M. A. Inhibitory interactions and columnar inputs to an object motion detector in Drosophila. Cell Rep. 30, 2115–2124 (2020).

  77. Shinomiya, K., Nern, A., Meinertzhagen, I. A., Plaza, S. M. & Reiser, M. B. Neuronal circuits integrating visual motion information in Drosophila melanogaster. Curr. Biol. 32, 3529–3544 (2022).

  78. Wang-Chen, S. et al. Data accompanying ‘NeuroMechFly v2, simulating embodied sensorimotor control in adult Drosophila’. The Harvard Dataverse Repository https://doi.org/10.7910/DVN/3MCEYR (2024).

  79. Wang-Chen, S. et al. Code accompanying ‘NeuroMechFly v2, simulating embodied sensorimotor control in adult Drosophila’. Zenodo https://doi.org/10.5281/zenodo.12973000 (2024).

Acknowledgements

We thank V. Lobato-Rios for valuable insights and early exploration of visual inputs to the model. We thank J. Arreguit (EPFL, Switzerland), S. T. Ramalingasetty (EPFL, Switzerland) and A. J. Ijspeert (EPFL, Switzerland) for the development of FARMS, which was used to generate the MJCF file of the updated model. We thank J. K. Lappalainen (Tübingen University, Germany; Tübingen AI Center, Germany; Janelia Research Campus, USA), J. H. Macke (Tübingen University, Germany; Tübingen AI Center, Germany; Max Planck Institute for Intelligent Systems, Germany), S. C. Turaga (Janelia Research Campus, USA) and colleagues for making the FlyVision model available before publication. P.R. acknowledges support from a Swiss National Science Foundation (SNSF) Project Grant (175667) and an SNSF Eccellenza Grant (181239). S.W.-C. acknowledges support from a Boehringer Ingelheim Fonds PhD fellowship. P.G.Ö. acknowledges support from a Swiss Government Excellence PhD Scholarship and a Google PhD Fellowship. F.H. acknowledges support from a Boehringer Ingelheim Fonds PhD fellowship.

Author information

Authors and Affiliations

  1. Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland

    Sibo Wang-Chen, Victor Alfred Stimpfling, Thomas Ka Chung Lam, Pembe Gizem Özdil, Louise Genoud, Femke Hurtak & Pavan Ramdya

  2. Biorobotics Laboratory, EPFL, Lausanne, Switzerland

    Pembe Gizem Özdil

Authors

  1. Sibo Wang-Chen
  2. Victor Alfred Stimpfling
  3. Thomas Ka Chung Lam
  4. Pembe Gizem Özdil
  5. Louise Genoud
  6. Femke Hurtak
  7. Pavan Ramdya

Contributions

S.W.-C.—conceptualization, methodology, software, formal analysis, investigation, data curation, validation, writing—original draft preparation, writing—review and editing and visualization. V.A.S.—conceptualization, methodology, software, formal analysis, investigation, data curation, validation, writing—original draft preparation, writing—review and editing and visualization. T.K.C.L.—conceptualization, methodology, software, formal analysis, investigation, data curation, validation, writing—review and editing and visualization. P.G.Ö.—conceptualization, methodology, software, investigation, data curation, validation, writing—original draft preparation, visualization and writing—review and editing. L.G.—methodology, software, validation, investigation and writing—review and editing. F.H.—conceptualization, methodology, software, investigation and writing—review and editing. P.R.—conceptualization, methodology, resources, writing—original draft preparation, writing—review and editing, supervision, project administration and funding acquisition.

Corresponding authors

Correspondence to Sibo Wang-Chen or Pavan Ramdya.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Methods thanks Stephane Viollet and the other, anonymous, reviewers for their contribution to the peer review of this work. Primary Handling Editor: Nina Vogt, in collaboration with the Nature Methods team. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Improvements to the biomechanical model.

A comparison of the original (left) and updated (right) NeuroMechFly biomechanical models from (a) a zoomed-in view of the head highlighting antennal DoFs, (b) side views and (c) front views. DoFs are indicated in green. The highlighted differences are: (1) additional DoFs in the antennae, (2) a gap for the neck between the head and the thorax, (3) the angles of the thorax and the position of the head relative to it, and (4) the placement of the legs on the thorax.

Extended Data Fig. 2 Preprogrammed stepping based on experimentally recorded data.

Joint kinematics for each leg during preprogrammed stepping. Kinematic patterns were derived from behavioral recordings. Time series for each joint are color-coded. ThC: thorax-coxa joint; CTr: coxa-trochanter joint; FTi: femur-tibia joint; TiTa: tibia-tarsus joint. Note the left-right symmetry in roll and yaw DoFs. Periods when adhesion is turned off during swing to facilitate lifting each leg are indicated in light gray; periods when adhesion is on are indicated in dark gray.

Source data

Extended Data Fig. 3 Calibration of vision.

(a) The calibration environment has black pillars spaced regularly around the fly at 15° intervals. Additionally, red, green, and blue pillars indicate the anterior, midline, and posterior field of view (FOV) limits of the left eye. Yellow, magenta, and cyan pillars indicate the FOV limits of the right eye. (b) Each eye has a FOV spanning ~144° horizontally. The two eyes overlap by ~17°, resulting in an overall horizontal FOV of ~270°. (c) A raw camera view of what the fly sees in this environment before applying a fisheye effect. Note that, by default, the rectilinear camera distorts areas closer to the edges of the FOV to keep lines straight. (d) A fisheye effect is applied to simulate the roughly spherical arrangement of ommatidia in the fly eye. (e) Retinal inputs are simulated by binning the pixels according to the hexagonal grid of ommatidia and taking the average intensity within each ommatidium. Ommatidia are randomly sensitive to the green (yellow-type) or blue (pale-type) channel in a 7:3 ratio.
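
The binning step in panel (e) can be sketched as follows. This is a simplified illustration, not FlyGym's actual implementation: the camera resolution, the pixel-to-ommatidium lookup (here random, rather than derived from the calibrated hexagonal lattice) and the per-eye ommatidium count are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_OMMATIDIA = 721   # assumed per-eye ommatidium count (placeholder)
H, W = 512, 450     # hypothetical fisheye-corrected camera resolution

# Hypothetical precomputed lookup mapping each camera pixel to an
# ommatidium index; in practice this comes from the hexagonal lattice.
pixel_to_omm = rng.integers(0, N_OMMATIDIA, size=(H, W))

# Yellow-type ommatidia sense the green channel, pale-type the blue
# channel, assigned randomly in a 7:3 ratio as described in panel (e).
is_yellow = rng.random(N_OMMATIDIA) < 0.7

def simulate_retina(rgb_img: np.ndarray) -> np.ndarray:
    """Average the relevant color channel over each ommatidium's pixels."""
    flat_idx = pixel_to_omm.ravel()
    counts = np.bincount(flat_idx, minlength=N_OMMATIDIA)
    green = np.bincount(flat_idx, weights=rgb_img[..., 1].ravel(),
                        minlength=N_OMMATIDIA) / counts
    blue = np.bincount(flat_idx, weights=rgb_img[..., 2].ravel(),
                       minlength=N_OMMATIDIA) / counts
    return np.where(is_yellow, green, blue)

img = rng.random((H, W, 3))      # stand-in for a rendered camera frame
readings = simulate_retina(img)  # one intensity value per ommatidium
```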

Extended Data Fig. 4 Trajectories during path integration based on ascending feedback.

Actual walking trajectories (black) and ascending feedback-based estimates (red) for five trials (rows) and three different insect locomotor gaits (columns). Starting positions of the paths are indicated (black circles). For each trial (row), the fly executes the same sequence of straight walking and turns but with different gaits.
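
The feedback-based estimates shown here are a form of dead reckoning: per-step displacement and heading-change estimates are accumulated into a 2D trajectory. A minimal sketch, with made-up signals standing in for the quantities decoded from ascending proprioceptive feedback:

```python
import numpy as np

def integrate_path(d_forward, d_heading):
    """Dead-reckon a 2D trajectory from per-step forward-displacement
    and turn estimates (hypothetical stand-ins for decoded ascending
    feedback signals)."""
    heading = np.cumsum(d_heading)       # absolute heading at each step
    dx = d_forward * np.cos(heading)     # per-step x displacement
    dy = d_forward * np.sin(heading)     # per-step y displacement
    return np.cumsum(dx), np.cumsum(dy)  # integrated trajectory

# Toy example: 10 unit steps straight, a 90° left turn, 10 more steps
d_fwd = np.ones(20)
d_head = np.concatenate([np.zeros(10), [np.pi / 2], np.zeros(9)])
x, y = integrate_path(d_fwd, d_head)     # ends near (10, 10)
```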

Extended Data Fig. 5 Efficacy of head stabilization as a function of terrain type and ascending signals.

(a-b) The standard deviations of ommatidia readings from the left eye while walking over (a) flat or (b) blocks terrain, without or with ascending feedback-based head stabilization. Note the high variability in light intensity near the horizon when the head is not stabilized; this is due to more pronounced self-motion of the head. (c-d) Coefficient of determination (r2) between predicted and optimal head roll (blue) and pitch (brown) when performing head stabilization while walking over (c) flat or (d) blocks terrain and using ascending motor feedback from different sets of leg joint angles. ‘~(removed DoF)’ indicates that all leg DoFs are used except the removed one (or ones at the same joint); ‘used DoF’ indicates that only the indicated DoF (or multiple ones at the same joint) is used. In each case, the same set of DoFs is used in all legs. Note that ground contact information is always used, hence the better-than-chance performance in the cases where no leg DoF is used. Overlaid are box plots indicating the median, upper and lower quartiles, and whiskers extending to the furthest points excluding outliers more than 1.5 × the interquartile range (IQR) beyond the quartiles. N = 30 for each box; trials where the physics simulation failed due to numerical instabilities are excluded.

Source data

Extended Data Fig. 6 Vision model used in the multimodal navigation task.

(a) Illustration of hexagonal convolution compared to standard matrix convolution. (b) Accuracy of the model in predicting whether the obstacle is present in the fields of view of the fly’s eyes. The reported F1 score is the harmonic mean of the precision and recall. (c-d) Accuracy of the model in predicting the (c) direction and (d) distance of the obstacle from the fly. The angular r2 score is defined as the r2 score of sin(ϑ) concatenated with cos(ϑ) for all samples, where ϑ is the angle. (e-f) Accuracy of the model in predicting the (e) azimuth and (f) size of the obstacle in the retinal images. N = 2,646.

Source data
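
The angular r2 score defined in the caption of Extended Data Fig. 6c can be written down directly. A self-contained sketch (the r2 helper is re-implemented with NumPy rather than imported from a library; function names are illustrative):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def angular_r2(theta_true, theta_pred):
    """r2 of sin(theta) concatenated with cos(theta), so errors are
    measured on the unit circle and the -pi/pi wraparound is handled."""
    y_true = np.concatenate([np.sin(theta_true), np.cos(theta_true)])
    y_pred = np.concatenate([np.sin(theta_pred), np.cos(theta_pred)])
    return r2_score(y_true, y_pred)

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, 100)  # toy ground-truth directions
```

Note that angles straddling ±π (e.g. 3.14 vs -3.14 rad) score highly under this metric even though their raw difference is large, which is the motivation for the sin/cos construction.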

Extended Data Fig. 7 Performance of the connectome-constrained visual controller in a fly following task.

Using (a) all T1–T5, Tm, and TmY neurons, or (b) T2, T2a, T3, Tm, and TmY neurons (upstream partners of LC9 and LC10 neurons52) to perform fly following either without or with head stabilization while walking over flat or blocks terrain. Shown are 11 trials (color-coded) per case.

Supplementary information

Reporting Summary (PDF)

Peer Review File (PDF)

Supplementary Video 1 (MP4)

Obtaining 3D poses from an untethered walking fly to model more realistic stepping. Comparison between template-aligned (solid lines) and forward kinematics-reconstructed (dashed lines) 3D poses (left). Legs are color-coded. Recording of an untethered fly walking straight through a corridor as seen from three viewpoints (right). Note that the center panel of the fly recording shows the ventral view; therefore, the legs closer to the top of the screen are on the left side of the fly. The video was recorded at 360 fps, downsampled to 120 fps, and displayed at 10% speed (that is, 12 fps). Overlaid are manually annotated thoracic, antennal, head, abdominal and leg key points for 36 frames. Data used for preprogrammed steps are indicated (red lines).

Supplementary Video 2 (MP4)

Preprogrammed stepping pattern of each leg. Individual legs are stepped in series according to their 3D pose estimation-derived joint kinematics. Simulation is played back at 0.05 times the real speed.

Supplementary Video 3 (MP4)

Ground reaction forces during locomotion with a CPG-based controller. Here and in all subsequent videos, the tarsi are color coded: natural leg color indicates that adhesion is off; dark blue indicates that adhesion is on and the tarsus is in contact with the ground; red indicates that adhesion is on but the tarsus is not in contact with the ground. In this particular video, because the terrain is flat, the third case rarely occurs. Simulation is played back at 0.05 times the real speed.

Supplementary Video 4 (MP4)

Locomotion over sloped, vertical and inverted terrain using leg adhesion. Locomotion is driven by a CPG-based controller. Shown are simulations without (left) or with (right) leg adhesion. Indicated is the slope of the terrain (top). Simulation is played back at 0.1 times the real speed.

Supplementary Video 5 (MP4)

Control signals of the CPG-based controller. Shown for all legs are the CPG phases (wrapped by 2π) and amplitudes from random initializations. As CPGs synchronize, they generate a tripod gait. Simulation is played back at 0.1 times the real speed.

Supplementary Video 6 (MP4)

Control signals of the rule-based controller. Shown for all legs are the stepping scores and contributions of each of the three coordination rules. Indicated is the initiation of steps (triangles). Simulation is played back at 0.1 times the real speed.

Supplementary Video 7 (MP4)

Control signals of the hybrid controller. Shown for all legs are the CPG phases (wrapped by 2π) and amplitudes as well as the activation of the overstretch (solid) and stumbling (dashed) rules based on sensory feedback. The tibia is colored pink when the overstretch rule is active and light blue when the stumbling rule is active. Simulation is played back at 0.1 times the real speed.

Supplementary Video 8 (MP4)

Locomotion over multiple terrain types. The fly walks over a flat surface (first column), a surface with gaps (second column), a surface with blocks (third column) and a mixed surface (fourth column). The fly is controlled by a CPG-based controller (top), a rule-based controller (middle) or a hybrid controller, which integrates both CPGs and sensory feedback rules (bottom). Shown are the results from 20 trials for each condition. For the hybrid controller, the tibia is colored pink when the overstretch rule is active and light blue when the stumbling rule is active. Simulation is played back at 0.1 times the real speed.

Supplementary Video 9 (MP4)

Visual object tracking task. The fly uses vision to follow a black sphere that is moving away along an S-shaped trajectory. Shown are raw visual signals from the left and right eyes (bottom). A hybrid controller with leg adhesion is used for locomotion. Note that each eye’s field of view can observe front leg movements. Simulation is played back at 0.5 times the real speed.

Supplementary Video 10 (MP4)

Olfactory chemotaxis task. The fly seeks an attractive odor source (orange) while avoiding two aversive odor sources (blue). Colored bars (bottom) indicate the intensities of attractive (orange) and aversive (blue) odors sensed by antennae on each side of the head. A hybrid controller with leg adhesion is used for locomotion. Simulation is played back at 0.5 times the real speed.

Supplementary Video 11 (MP4)

Head stabilization using ascending motor feedback. Shown are an overhead view of the fly, a zoomed-in view of head movements, the raw left eye ommatidia signals and time series of neck actuation (roll and pitch) in the absence or presence of head stabilization. Neck actuation signals are either optimal (based on inverting thoracic pitch and roll, dashed lines) or predicted based on ascending motor feedback signals from the legs (solid lines). The first half of the video is during walking over flat terrain. The second half is during walking over blocks terrain. Simulation is played back at 0.2 times the real speed.

Supplementary Video 12 (MP4)

Neural controller for multimodal navigation trained through reinforcement learning. The fly seeks an attractive odor source (orange) while using vision to avoid an obstacle (black pillar) over rugged mixed terrain. Shown are visual inputs to the left and right eyes (bottom center). Orange bars (bottom left and bottom right) indicate the intensities of an attractive (orange) odor sensed by antennae on each side of the head. Locomotion is regulated using a hybrid controller with leg adhesion. Simulation is played back at 0.2 times the real speed. Nine trials beginning from different spawn locations are shown sequentially.

Supplementary Video 13 (MP4)

Navigating a complex plume using a bio-inspired odor-taxis algorithm. The fly model uses a Drosophila plume navigation algorithm51 to reach the odor source (left). Shown are the fly’s current state (for example, pause, turn, walk forward) and algorithm parameter values (bottom left). A red bar (bottom left) indicates the intensity of odor detection. Also shown are the trajectory of the fly (red) and a zoomed-in bird’s-eye view of the fly (bottom right). Simulation is played back at 0.5 times the real speed.

Supplementary Video 14 (MP4)

Following another fly using a connectome-constrained vision network. A ‘following’ fly model, controlled by a connectome-constrained visual system neural network31, ascending motor feedback-based gaze stabilization, descending steering and a hybrid locomotor controller with leg adhesion, is tasked to follow a ‘leading’ fly model across blocks terrain. Shown are an overhead view of both models’ trajectories (top row, middle column), raw ommatidia readings from the left and right retinas (top row, first and last columns), object detection scores obtained by processing visual neuron outputs (top row, second and fourth columns) and the spatial activities of Drosophila neurons processing visual signals from each eye (bottom). Neural polarization is color-coded from most hyperpolarized (blue) to most depolarized (red). Simulation is played back at 0.2 times the real speed.

Source data

About this article

Cite this article

Wang-Chen, S., Stimpfling, V.A., Lam, T.K.C. et al. NeuroMechFly v2: simulating embodied sensorimotor control in adult Drosophila. Nat Methods 21, 2353–2362 (2024). https://doi.org/10.1038/s41592-024-02497-y
