Imagine controlling an aircraft with your thoughts: a thought-controlled jetpack straight out of Iron Man, with 360° visual passthrough and real-time sensory feedback. Now imagine that this is possible in real life, using a brain-computer interface (BCI) headset that reads your brain signals and translates them into flight commands.
Isn't it too risky to connect BCI data directly to flight controls?
Of course - you might think that feeding motor imagery data directly into flight controls would be too dangerous, or too vulnerable to noise and other artifacts. First, our AI-trained model is designed to filter out noise and artifacts, so that only the most relevant signals are used for control. Second, we are not interfacing directly with the flight controls. Instead, we use a large action model (LAM) to interpret the BCI data and issue commands to the drone. This middle layer - we like to call it a fly-by-LAM (large action model) system, or fly-by-wire 2.0! - handles all flight control between the aircraft and the BCI input, giving us a safe and reliable way to control the aircraft with BCI data.
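To make the layering concrete, here is a minimal sketch of how a fly-by-LAM middle layer could sit between the BCI decoder and the aircraft. The names (`decode_intent`, `plan_action`, `FlightEnvelope`) and the thresholds are illustrative assumptions, not our production API:

```python
# Minimal sketch of the "fly-by-LAM" middle layer described above.
# All names and numbers are placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class FlightEnvelope:
    """Hard limits the middle layer enforces regardless of BCI input."""
    max_bank_deg: float = 25.0
    max_pitch_deg: float = 15.0
    max_climb_rate_mps: float = 3.0

def decode_intent(eeg_window) -> tuple[str, float]:
    """Placeholder: a trained motor-imagery classifier would return
    (intent_label, confidence), e.g. ("bank_left", 0.87)."""
    raise NotImplementedError

def plan_action(intent: str, confidence: float, envelope: FlightEnvelope) -> dict:
    """The LAM layer treats the decoded intent as a request and emits a
    bounded, envelope-limited command rather than a raw control deflection."""
    if confidence < 0.75:
        # Low-confidence or noisy signals are ignored; the autopilot keeps flying.
        return {"command": "hold_current_state"}
    if intent == "bank_left":
        return {"command": "set_bank", "deg": -min(15.0, envelope.max_bank_deg)}
    if intent == "bank_right":
        return {"command": "set_bank", "deg": min(15.0, envelope.max_bank_deg)}
    return {"command": "hold_current_state"}
```

The key design choice is that the BCI never moves a control surface directly; it can only request actions that the envelope allows.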
What will the Large Action Model (LAM) do?
Large action models (LAMs) are AI models designed to understand human intentions and translate them into actions within a given environment or system. Instead of the motor imagery data controlling the aircraft directly, the LAM treats the BCI input as a "prompt" and determines the best course of action based on it. In an emergency, the LAM issues commands to stabilize the aircraft and prevent a crash. In normal flight, it interprets the BCI input and commands the aircraft to perform the desired action. Our goal is to bring SpaceX Crew Dragon-level automation to the aircraft itself; when manual control is needed, our brain-computer interface helps pilots "prompt" their desired actions to the flight computer.
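As an illustration of the "prompt" framing (not our actual model interface), the sketch below shows how a decoded intent plus aircraft telemetry could be turned into a prompt, with a deterministic emergency path that bypasses the model entirely. `lam.generate`, `build_prompt`, and the telemetry fields are assumptions:

```python
# Illustrative sketch only: framing BCI intent as a "prompt" for a LAM,
# with a rule-based emergency path that always takes priority.

def build_prompt(intent: str, telemetry: dict) -> str:
    """Turn the decoded pilot intent plus aircraft state into a text prompt
    the LAM can reason over."""
    return (
        f"Pilot intent: {intent}. "
        f"Altitude: {telemetry['altitude_m']} m, "
        f"airspeed: {telemetry['airspeed_mps']} m/s, "
        f"bank: {telemetry['bank_deg']} deg. "
        "Respond with a single safe flight command."
    )

def next_command(intent: str, telemetry: dict, lam) -> str:
    # Emergency handling is deterministic and never waits on model inference.
    if telemetry["airspeed_mps"] < telemetry["stall_speed_mps"] * 1.1:
        return "recover: lower nose, wings level, full power"
    if abs(telemetry["bank_deg"]) > 45:
        return "recover: roll wings level"
    # Normal flight: the LAM interprets the intent prompt and proposes an action.
    return lam.generate(build_prompt(intent, telemetry))
```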
3D Head Scan for Custom Helmet Design
A key requirement for a successful non-invasive BCI system is a perfect fit: every probe on the BCI headset must be in contact with the user's scalp to get accurate readings. To achieve this, we are developing a 3D head scan system that lets us create a custom-fit helmet for each user. Our first demo app was made possible by the iPhone's TrueDepth sensor, which lets us build a 3D model of the user's head in real time. Our technical founder John has developed an iOS app with a C++ library that stitches together point cloud data from the iPhone's front-facing sensors into a complete 3D model of the user's head.
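The production pipeline is the iOS app plus the custom C++ library, but the core idea, registering successive depth frames against each other, can be sketched in a few lines of Python using Open3D's ICP as a stand-in:

```python
# Illustrative stand-in for the stitching step, assuming each TrueDepth frame
# has already been converted to an Open3D point cloud in metric units.
import open3d as o3d

def stitch_frames(clouds: list[o3d.geometry.PointCloud]) -> o3d.geometry.PointCloud:
    """Incrementally register each new frame against the growing model with ICP."""
    merged = clouds[0]
    for cloud in clouds[1:]:
        reg = o3d.pipelines.registration.registration_icp(
            cloud, merged,
            max_correspondence_distance=0.01,  # 1 cm correspondence search radius
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        )
        merged += cloud.transform(reg.transformation)
    # Downsample to an even ~2 mm resolution before meshing and export.
    return merged.voxel_down_sample(voxel_size=0.002)
```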
Early access to Orchestr 3D Face & Head Scan is available for iPhone users:
Simply scan your head with a LiDAR-equipped iPhone, export the 3D model in OBJ format, and check dimensions such as interpupillary distance and head circumference (coming soon).
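As an example of what the upcoming measurements could look like, once the OBJ is exported a value like head circumference can be approximated offline. The sketch below is an assumption-heavy illustration (file name, axis orientation, and slice height are all hypothetical); it slices the model horizontally and measures the perimeter of the slice's convex hull:

```python
# Rough sketch of estimating head circumference from the exported OBJ.
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

mesh = o3d.io.read_triangle_mesh("head_scan.obj")  # hypothetical file name
pts = np.asarray(mesh.vertices)

# Take a thin horizontal band of vertices near the widest part of the head
# (assumes the model's y-axis points up and units are meters).
y = pts[:, 1]
band = pts[np.abs(y - np.percentile(y, 80)) < 0.005]  # 5 mm band

# Approximate the circumference as the perimeter of the band's 2-D convex hull.
hull = ConvexHull(band[:, [0, 2]])
ring = band[:, [0, 2]][hull.vertices]
circumference = np.sum(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1))
print(f"Estimated head circumference: {circumference * 100:.1f} cm")
```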
Founder's Background
John has been in flight training toward an instrument (IFR) rating in a Cessna 172 (a single-engine airplane) and has a background in computer science from UC Irvine, specializing in machine learning and computer vision. He is also a drone pilot with experience in FPV racing and building custom drones. His goal is to revolutionize how pilots interact with their aircraft, making it more intuitive and efficient.