Activity Monitor for Patients on Supplemental Oxygen

For my Microcomputer Applications class at Arizona State University we were tasked with designing and building a medical device that utilized some sort of microprocessor.

I had heard that a friend on supplemental oxygen often had difficulty remembering to adjust her oxygen supply to match her level of exertion. Essentially, when she transitioned from sitting to standing/walking without increasing her oxygen flow rate, her oxygen consumption increased (due to the added exertion) but her supply did not. This resulted in oxygen deprivation and a high risk of syncope. Likewise, if she transitioned from standing/walking to sitting without reducing her oxygen flow rate, she could become hyperoxic, which is also dangerous. As such, I decided to build a device that would help her remember to adjust her oxygen supply.

I used an Arduino Nano, an MPU6050, a MaxBotix ultrasonic rangefinder, an HC-05 Bluetooth transceiver (for debugging), a broken ear bud (used as the speaker), a momentary switch, a regular switch, a protoboard, and a project box from Fry’s Electronics.

The idea was to use the MPU6050 in combination with the rangefinder to determine whether the person wearing the device was sitting or standing/walking, by measuring both activity (movement) and the distance to the ground (the device was intended to be worn on the waist). Doing this with only an inertial measurement unit (the MPU6050, for example) would be extremely difficult, as there is not much information to work with: how can you tell the difference between sitting motionless and standing motionless from inertial information alone? Hence the choice to use the rangefinder to measure the distance to the ground. The device played a series of tones whenever a transition in activity was detected (sitting to standing, or standing to sitting). The tones ascended in frequency to signal a transition from sitting to standing (which requires an increase in oxygen flow rate), and descended in frequency to signal a transition from standing to sitting (which requires a decrease in oxygen flow rate). The alarm (which played through my broken ear bud) could be dismissed by pressing the momentary switch. The figures below provide further information, and the sketch that follows illustrates the transition logic.
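To make the logic concrete, here is a minimal sketch of the transition detection, written in Python for readability (the device itself ran an Arduino sketch). The threshold values and the way the two readings are combined are illustrative assumptions, not the exact rules used on the device.

```python
# Minimal sketch of the sit/stand transition logic (illustrative only; the real
# device runs Arduino code and uses its own calibrated thresholds).

SITTING, STANDING = "sitting", "standing"

RANGE_THRESHOLD_IN = 24.0    # below this distance to the target, assume sitting (assumed value)
ACTIVITY_THRESHOLD = 0.15    # IMU activity above this suggests movement (assumed value)

def classify(range_in, activity):
    """Combine rangefinder distance and IMU activity into a posture estimate."""
    if range_in < RANGE_THRESHOLD_IN and activity < ACTIVITY_THRESHOLD:
        return SITTING
    return STANDING

def alarm_for_transition(previous, current):
    """Return which alarm to play when the posture changes."""
    if previous == SITTING and current == STANDING:
        return "ascending tones: increase oxygen flow"
    if previous == STANDING and current == SITTING:
        return "descending tones: decrease oxygen flow"
    return None

# Example loop over fake (distance-in-inches, activity) samples:
samples = [(40.0, 0.30), (41.0, 0.25), (18.0, 0.05), (17.5, 0.04)]
state = STANDING
for range_in, activity in samples:
    new_state = classify(range_in, activity)
    alarm = alarm_for_transition(state, new_state)
    if alarm:
        print(alarm)          # on the device, the tone series plays until dismissed
    state = new_state
```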

The device worked quite well after several problems were addressed. First, the initial position of the device was too low on the belt, putting the rangefinder too close to the target (the chair) when the user was sitting, which resulted in failed range measurements. Second, if the device was angled while the user was sitting, aliasing occurred: sound reflected off of the chair, then off of the ceiling or a wall, before returning to the sensor, occasionally producing range measurements greater than 100 in. This was corrected by ensuring that the device pointed squarely at the target; a future modification would be to mount the device on a swivel so that it always points straight down.

With these two problems corrected, the device performed with 95.84% accuracy during controlled testing. However, the device has a fatal flaw: if the chair the user sits on is not wide enough to enter the field of view of the ultrasonic rangefinder, the rangefinder will not see the chair and will likely measure the distance to the ground instead. The device will then conclude that the user is still standing, especially if the chair seat is high off the ground. Despite this flaw, the device was well received for its advanced construction, functionality, (comparative) user friendliness, and interesting application. The code can be found here. (I had to stop using Dropbox to host my downloads, as they are ending their download link service. I have switched to Mega, which seems to work really well; I had some trouble using the Mega links with Safari, but Chrome works.)

Figure 1: Block diagram of device function, encompassing hardware (including communication protocols), processing of data in software, and communication with the user.
Figure 2: Wiring diagram of the device components (not including the ear bud, which acted as the speaker).
Figure 3: Device enclosure (black) with the power switch and the ultrasonic rangefinder (round, with slits) at top left. The metal belt clip is shown beneath the device.
Figure 4: Side view of the device enclosure (black) with power switch and labels, shown holding the belt clip.
Figure 5: Device enclosure (black) showing the speaker (white) and the dismissal button (black, top left).
Figure 6: Device internals and lid. The MPU-6050 is the component with the yellow light; the rangefinder is at top left, the Bluetooth transceiver at top right, and the Arduino Nano in the center. The speaker and button are at the bottom.
Figure 7: Modular construction allows most components to be easily unplugged/detached from the main body and protoboard.
Figure 8: Bottom of the protoboard, showing wire management.
Figure 9: The device is mounted high on the belt, maintaining a measurable distance from the chair.


Workshop in Technology for Sense-Expansion

Last week I had the pleasure of attending a five-day workshop at Arizona State University titled simply “Workshop in Technology for Sense-Expansion.” The workshop was hosted by Dr. Frank Wilczek (recipient of the 2004 Nobel Prize in Physics). It was intended to give an introduction to human sensory biomechanics/molecular mechanics and to describe the two primary forms of sensory information we use: sight and hearing. We discussed the nature of these two signals and how they can be used for sense expansion (that is, the ability to see or hear more than we currently can). The basic idea is that sight and hearing can be combined to encode information that is usually not available. For example, if the UV spectrum were mapped to a range of audible frequencies, we could then “see” UV light. The same is true for IR, or any other electromagnetic radiation that our eyes do not detect.

In the workshop we were introduced to several forms of image processing using Python. In particular, we used a technique called “temporal image processing” (TIP) to elucidate differences in images that are difficult to discern otherwise (starting with RGB images). We then applied this to hyperspectral data acquired from the PARC hyperspectral camera (currently under development, but Dr. Wilczek had a prototype for us to work with). The PARC hyperspectral camera is a cutting-edge digital imaging device that captures a wide range of frequencies rather than just red, green, and blue. A normal RGB image is a height x width x 3 array, corresponding to the image resolution and the three color channels (sometimes a 4th layer is included that contains alpha scaling values). The hyperspectral camera’s image data, however, was height x width x 148: 148 different frequency bands acquired at every pixel. We used the hyperspectral camera to take pictures of a variety of objects, using “iLuminator” boxes that allowed us to control the frequency of light illuminating the object (pictured below). We also used Arduinos to construct “synesthesia gloves” that allow the wearer to hold their hand over a color and receive both auditory and visual information describing the color sampled (also pictured below).

Overall, the workshop was a wonderful experience. I learned a lot of useful and interesting tools in Python, and had a great time working with the Arduinos and the hyperspectral camera. I even wrote a simple GUI to easily control the illumination settings in the iLuminator boxes. This workshop was a preliminary run of a course that Dr. Wilczek hopes to make available to everyone on the web.
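As a toy illustration of the band-to-audio mapping idea described above, here is a short Python sketch that picks the dominant band in one pixel of a height x width x 148 hyperspectral cube and maps it linearly onto an audible frequency. The array shapes, the random stand-in data, and the linear mapping are my own illustrative assumptions, not the workshop’s actual processing pipeline or the TIP technique.

```python
import numpy as np

# Toy example: encode a pixel's dominant spectral band as an audible pitch.
H, W, BANDS = 64, 64, 148           # hyperspectral cube: height x width x 148 bands
cube = np.random.rand(H, W, BANDS)  # stand-in for real camera data

def band_to_audio_hz(band_index, n_bands=BANDS, lo_hz=220.0, hi_hz=3520.0):
    """Linearly map a spectral band index onto an audible frequency range."""
    return lo_hz + (hi_hz - lo_hz) * band_index / (n_bands - 1)

def pixel_tone(cube, row, col, sample_rate=44100, duration_s=0.5):
    """Generate a sine tone whose pitch encodes the pixel's brightest band."""
    spectrum = cube[row, col, :]            # 148-point spectrum at this pixel
    dominant = int(np.argmax(spectrum))     # index of the brightest band
    freq = band_to_audio_hz(dominant)
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
    return np.sin(2.0 * np.pi * freq * t)

tone = pixel_tone(cube, 32, 32)
print("Tone samples:", tone.shape)
```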

The “iLuminator” box, which contains strips of red, green, blue, amber, pink, and UV LEDs driven by an Arduino and five FemtoBuck LED drivers. The LED intensities are controlled by a simple GUI application I wrote in Python. The front of the box has a cover, so the interior is illuminated only by the overhead LEDs.

Inside the iLuminator.
The Arduino Micro and the FemtoBuck LED drivers.
My simple Python GUI to control LED intensity. The program allows a serial port to be selected and connected to, and provides sliders and toggles to control LED intensity (as well as spin boxes to manually set an intensity and read off the current settings). It is written in PyQt4 and uses PySerial to manage the USB connection. You can download the code here. (I’ve had to switch to Mega to host my files now that Dropbox is ending their link service; I had some trouble with the download links in Safari, but Chrome works fine.)
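For anyone curious about the serial side of that GUI, here is a minimal PySerial sketch. The "channel:intensity" command format is a hypothetical placeholder for illustration (not the actual protocol my Arduino sketch expects), and the helper names are my own.

```python
import serial
import serial.tools.list_ports

def list_serial_ports():
    """Return available serial port names (what the GUI's port selector would show)."""
    return [p.device for p in serial.tools.list_ports.comports()]

def set_led_intensity(port, channel, intensity, baud=9600):
    """Send one intensity command (0-255) for a single LED channel.

    The '<channel>:<intensity>\\n' format is a made-up example protocol.
    """
    intensity = max(0, min(255, int(intensity)))
    with serial.Serial(port, baud, timeout=1) as conn:
        conn.write(f"{channel}:{intensity}\n".encode("ascii"))

# Example: set a hypothetical "uv" channel to half brightness on the first detected port.
ports = list_serial_ports()
if ports:
    set_led_intensity(ports[0], "uv", 128)
```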
The top of the “synesthesia glove,” showing the 3V Pro Trinket, Pro Trinket LiPo backpack (and associated LiPo battery), SoundBoard, and NeoPixel Jewel, all from Adafruit.
The bottom of the “synesthesia glove,” showing the RGB and UV sensors from Adafruit, which were used to measure the light (color) reflected from objects the glove was placed against.
An early setup of the PARC hyperspectral camera pointing at some geological samples that fluoresce (fluorite and uranium, to name a few). The camera was later placed in an iLuminator to control illumination frequencies and intensities.
The previously mentioned objects close up, illuminated with UV light.
Dr. Wilczek and students working with the camera. My iLuminator box is at bottom left. The camera was placed inside the box, and we captured images of many objects under different light frequencies.
An interesting “pinhole camera” effect from the spatial arrangement of the LEDs inside the box.