Activity Monitor for Patients on Supplemental Oxygen

For my Microcomputer Applications class at Arizona State University, we were tasked with designing and building a medical device that used some sort of microprocessor.

I had heard that a friend on supplemental oxygen often had difficulty remembering to adjust her oxygen supply to match her level of exertion. Essentially, when she transitioned from sitting to standing/walking without increasing her oxygen flow rate, her oxygen consumption increased (due to greater exertion) but her supply did not. This resulted in oxygen deprivation and a high risk of syncope. Conversely, if she transitioned from standing/walking to sitting without reducing her oxygen flow rate, she could become hyperoxic, which is also dangerous. I therefore decided to build a device that would help her remember to adjust her oxygen supply.

I used an Arduino Nano, an MPU6050, a MaxBotix ultrasonic rangefinder, an HC-05 Bluetooth transceiver (for debugging), a broken earbud, a momentary switch, a regular switch, protoboard, and a project box from Fry’s Electronics.

The idea was to use the MPU6050 in combination with the rangefinder to determine whether the person wearing the device was sitting or standing/walking, by measuring both activity (movement) and the distance to the ground (the device was intended to be worn on the waist). Doing this with only an inertial measurement unit (such as the MPU6050) would be extremely difficult, as there is not much information to work with (for example, how can you tell the difference between sitting motionless and standing motionless from inertial information alone?). Hence the choice to use the rangefinder to measure the distance to the ground. The device functioned by playing a series of tones when a transition in activity was detected (sitting to standing, or standing to sitting). The series of tones ascended in frequency to signal a transition from sitting to standing (which would require an increase in oxygen flow rate), and descended in frequency to signal a transition from standing to sitting (which would require a decrease in oxygen flow rate). This alarm (which played through my broken earbud) could be dismissed by the user by pressing a momentary switch. The following figures provide further information.
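To make the logic concrete, here is a minimal Arduino sketch of the transition alarm described above. The pin assignments, the 24-inch sitting/standing threshold, and the tone frequencies are all illustrative, and the MPU6050 activity check is omitted for brevity; the full code is linked at the end of this section.

```cpp
// Minimal sketch of the sit/stand transition alarm (illustrative values).
// Assumes the MaxSonar's analog output on A0 (~Vcc/512 per inch, which at
// 5 V with a 10-bit ADC works out to about 2 counts per inch).

const int RANGE_PIN   = A0;  // MaxSonar AN output
const int SPEAKER_PIN = 9;   // earbud acting as speaker
const int DISMISS_PIN = 2;   // momentary switch to ground

const float STAND_THRESHOLD_IN = 24.0;  // hypothetical waist-to-floor cutoff

bool standing = false;

float readRangeInches() {
  return analogRead(RANGE_PIN) / 2.0;
}

void playAlarm(bool ascending) {
  // Ascending tones for sit->stand, descending for stand->sit.
  const int freqs[3] = {440, 660, 880};
  while (digitalRead(DISMISS_PIN) == HIGH) {   // repeat until dismissed
    for (int i = 0; i < 3; i++) {
      tone(SPEAKER_PIN, ascending ? freqs[i] : freqs[2 - i], 200);
      delay(250);
    }
    delay(500);
  }
}

void setup() {
  pinMode(DISMISS_PIN, INPUT_PULLUP);
}

void loop() {
  bool nowStanding = readRangeInches() > STAND_THRESHOLD_IN;
  if (nowStanding != standing) {
    playAlarm(nowStanding);   // remind the user to adjust O2 flow
    standing = nowStanding;
  }
  delay(100);
}
```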

The device worked quite well after several problems were addressed. First, the initial position of the device was too low on the belt, placing the rangefinder too close to the target (the chair) when the user was sitting; this resulted in failed range measurements. Second, if the device was angled when the user was sitting, aliasing occurred: sound reflected off of the chair, then off of the ceiling or wall, then returned to the sensor, occasionally producing spurious range measurements greater than 100 in. This was corrected by ensuring that the device pointed squarely at the target. A future modification would be to mount the device on a swivel so that it always points straight down. With these two problems corrected, the device performed with 95.84% accuracy during controlled testing. However, the device has a fatal flaw: if the chair the user is sitting on is not wide enough to enter the field of view of the ultrasonic rangefinder, the rangefinder will not see the chair and will likely measure the distance to the ground instead. The device will then think that the user is still standing, especially if the chair is high off the ground. Despite this flaw, the device was well received for its advanced construction, functionality, (comparative) user-friendliness, and interesting application. The code can be found here. (I had to stop using Dropbox to host my downloads, as they are ending the download link service. I have switched to Mega, which seems to work really well. I had some trouble using the Mega links with Safari, but Chrome works.)
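A software-side mitigation I did not implement, sketched here for completeness: since the aliased readings are implausibly long, a median filter plus a plausibility cutoff would reject most of them. The five-sample window and the 100-inch cutoff are illustrative.

```cpp
// Possible software mitigation for aliased readings (not in the device):
// take the median of five samples and discard anything implausibly long.
const float MAX_PLAUSIBLE_IN = 100.0;

float readRangeInches() { return analogRead(A0) / 2.0; }  // as in the sketch above

float filteredRangeInches() {
  float s[5];
  for (int i = 0; i < 5; i++) {
    s[i] = readRangeInches();
    delay(50);                  // LV-MaxSonar updates at 20 Hz
  }
  // Insertion-sort the five samples, then take the middle one.
  for (int i = 1; i < 5; i++) {
    for (int j = i; j > 0 && s[j] < s[j - 1]; j--) {
      float t = s[j]; s[j] = s[j - 1]; s[j - 1] = t;
    }
  }
  return (s[2] > MAX_PLAUSIBLE_IN) ? -1.0 : s[2];  // -1 flags an invalid read
}
```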

Figure 1: Block diagram of device function, encompassing hardware (including communication protocols), processing of data in software, and communication with the user.

Figure 2: Wiring diagram of the device components (not including the earbud, which acted as the speaker).

Figure 3: Device enclosure (black) with power switch and ultrasonic rangefinder (round with slits, top left). Metal belt clip shown beneath the device.

Figure 4: Side view of device enclosure (black) with power switch and labels, holding the belt clip.

Figure 5: Device enclosure (black) showing speaker (white) and dismissal button (black, top left).

Figure 6: Device internals and lid. The MPU6050 is the component with the yellow light; rangefinder top left; Bluetooth transceiver top right; Arduino Nano center; speaker and button at bottom.

Figure 7: Modular construction allows most components to be easily unplugged/detached from the main body and protoboard.

Figure 8: Bottom of protoboard, showing wire management.

Figure 9: The device is mounted high on the belt, maintaining a measurable distance from the chair.

 

Workshop in Technology for Sense-Expansion

Last week I had the pleasure of attending a 5-day workshop at Arizona State University titled simply “Workshop in Technology for Sense-Expansion.” The workshop was hosted by Dr. Frank Wilczek (recipient of the Nobel Prize in Physics in 2004). It was intended to give an introduction to human sensory biomechanics/molecular mechanics and to describe the two primary forms of sensory information that we use (sight and hearing). We discussed the nature of these two signals and how they can be used for sense-expansion (that is, the ability to see or hear more than we currently can). The basic idea is that sight and hearing can be combined to encode information that is usually not available. For example, if the UV spectrum were mapped to a range of audible frequencies, we could then “see” UV light. The same is true for IR, or any other electromagnetic radiation that our eyes do not detect.

In the workshop we were introduced to several forms of image processing using Python. In particular, we utilized a technique called “temporal image processing” (TIP) to elucidate differences in images that are difficult to discern otherwise (starting with RGB images). We then applied this to hyperspectral data acquired from the PARC hyperspectral camera (currently under development, but Dr. Wilczek had a prototype for us to work with). The PARC hyperspectral camera is a cutting-edge digital imaging device that captures a wide range of frequencies rather than just red, green, and blue. A normal RGB image is a height x width x 3 array, corresponding to the image resolution (height x width) and the three color channels; sometimes a 4th layer containing alpha scaling values is included. The hyperspectral camera’s image data, however, was height x width x 148! This means that 148 different frequency bands are acquired.

We used the hyperspectral camera to take pictures of a variety of objects, using “iLuminator” boxes that allowed us to control the frequency of light illuminating the object (pictured below). We also used Arduinos to construct “synesthesia gloves” that allow the wearer to hold their hand over a color and receive both auditory and visual information describing the color sampled (also pictured below). Overall, the workshop was a wonderful experience. I learned a lot of useful and interesting tools in Python, and had a great time working with the Arduinos and the hyperspectral camera. I even wrote a simple GUI to easily control the illumination settings in the iLuminator boxes. This workshop was a preliminary run of a course that Dr. Wilczek hopes to make available to everyone on the web.
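As a toy illustration of the mapping idea (my own sketch, not workshop code): linearly remap a band of near-UV wavelengths onto an audible band, then play the result as a tone. All of the band edges here are made-up numbers.

```cpp
// Toy sensory-substitution mapping: near-UV wavelength -> audible pitch.
// Band edges are illustrative, not from the workshop materials.
const int SPEAKER_PIN = 9;

float uvToAudibleHz(float wavelengthNm) {
  const float UV_MIN = 300.0, UV_MAX = 400.0;   // input band (nm)
  const float HZ_MIN = 200.0, HZ_MAX = 2000.0;  // output band (Hz)
  float t = (wavelengthNm - UV_MIN) / (UV_MAX - UV_MIN);
  t = constrain(t, 0.0, 1.0);
  return HZ_MIN + t * (HZ_MAX - HZ_MIN);
}

void setup() {}

void loop() {
  // A 350 nm "color" comes out as an 1100 Hz tone.
  tone(SPEAKER_PIN, (int)uvToAudibleHz(350.0), 500);
  delay(1000);
}
```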

The “iLuminator” box, which contains strips of red, green, blue, amber, pink and UV LEDs, driven by an Arduino and 5 FemtoBuck LED drivers. The LED intensities are controlled by a simple GUI application I wrote in Python. The front of the box has a cover, resulting in a space illuminated only by the overhead LEDs.
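For the Arduino side of that control chain, a minimal sketch might look like the following. It assumes each FemtoBuck’s PWM dimming input is wired to a PWM-capable pin and that the GUI sends “channel value” pairs over serial; the real protocol between my GUI and the box may differ.

```cpp
// Minimal sketch of iLuminator LED dimming: five FemtoBuck drivers, each
// dimmed via PWM. Pin choices and the serial protocol are illustrative.
const int NUM_CHANNELS = 5;
const int DIM_PINS[NUM_CHANNELS] = {3, 5, 6, 9, 10};  // PWM-capable pins

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_CHANNELS; i++) pinMode(DIM_PINS[i], OUTPUT);
}

void loop() {
  // Expect lines like "2 128\n": channel 0..4, intensity 0..255.
  if (Serial.available()) {
    int channel = Serial.parseInt();
    int value   = Serial.parseInt();
    if (channel >= 0 && channel < NUM_CHANNELS) {
      analogWrite(DIM_PINS[channel], constrain(value, 0, 255));
    }
    while (Serial.available() && Serial.read() != '\n') {}  // flush the line
  }
}
```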

Inside the iLuminator.
The Arduino Micro and FemtoBuck LED drivers.
My simple Python GUI to control LED intensity. The program allows a serial port to be selected and connected to, and provides sliders and toggles to control LED intensity (as well as spin boxes to manually set the intensity and read off the current intensity settings). It is written in PyQt4 and uses PySerial to manage the USB connection. You can download the code here. (I’ve had to switch to using Mega to host my files now that Dropbox is ending their link service. I had some trouble using the download links with Safari, but Chrome works fine.)
The top of the “synesthesia glove,” showing the 3V Pro Trinket, Pro Trinket Lipo backpack (and associated Lipo), SoundBoard, and NeoPixel Jewel, all from Adafruit.
The bottom of the “synesthesia glove,” showing the RGB and UV sensors from Adafruit, which were used to measure the light (color) reflected from objects that the glove was placed against.
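A minimal sketch of the glove’s color-to-feedback idea, assuming the RGB sensor is Adafruit’s common TCS34725 breakout (the post doesn’t name the exact part) and substituting a piezo on pin 9 for the SoundBoard:

```cpp
#include <Wire.h>
#include "Adafruit_TCS34725.h"

// Read the color under the glove and play a tone for the dominant channel.
// Sensor part, pin choice, and frequencies are assumptions for illustration.
Adafruit_TCS34725 tcs(TCS34725_INTEGRATIONTIME_154MS, TCS34725_GAIN_4X);
const int SPEAKER_PIN = 9;

void setup() {
  tcs.begin();   // returns false if the sensor isn't found
}

void loop() {
  uint16_t r, g, b, c;
  tcs.getRawData(&r, &g, &b, &c);

  // Dominant channel -> pitch: red low, green mid, blue high.
  int freq;
  if (r >= g && r >= b) freq = 300;
  else if (g >= b)      freq = 600;
  else                  freq = 1200;

  tone(SPEAKER_PIN, freq, 150);
  delay(300);
}
```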
An early setup of the PARC hyperspectral camera pointing at some geological compounds that fluoresce (fluorite and uranium, to name a few). The camera was later placed in an iLuminator to control illumination frequencies and intensities.
The previously mentioned objects close up and illuminated with UV light.
Dr. Wilczek and students working with the camera. My iLuminator box is at bottom left. The camera was placed inside the box, and we captured images of many objects under different light frequencies.
An interesting “pin hole” camera effect from the spatial arrangement of the LEDs inside the box.

Balancing Robot (Arduino)

PLEASE NOTE: This project is in its infancy. The code has little to no documentation, and I have not uploaded the CAD files for the frame.

Over winter break (2013), my friend David Ingraham and I built a balancing robot. It operates on the same principle as the popular Segway two-wheeled transportation system: it accelerates and decelerates the wheels in order to keep the entire system upright. We used a 3D-printed frame along with some components I had lying around. The code incorporates a PID controller and a Kalman filter. The PID controller code came from the Arduino PID Library, and the Kalman filter was based on this project. The primary purpose of the project was to learn how to write and use a Kalman filter. The code is still under development, and we will be upgrading the drivetrain soon (brushed DC motors and encoders). See the video at the bottom of the post for more information.
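The full code is linked below, but here is a stripped-down sketch of how the Arduino PID Library drives the continuous-rotation servos. The Kalman filter is stubbed out, and the gains and pin numbers are purely illustrative.

```cpp
#include <PID_v1.h>
#include <Servo.h>

double setpoint = 0.0;   // upright
double input    = 0.0;   // filtered tilt angle (degrees from vertical)
double output   = 0.0;   // wheel command, -90..90

double Kp = 15.0, Ki = 1.5, Kd = 0.4;  // illustrative gains
PID balancePid(&input, &output, &setpoint, Kp, Ki, Kd, DIRECT);

Servo leftWheel, rightWheel;

double getFilteredAngle() {
  // Placeholder: in the real robot this is the Kalman-filtered combination
  // of the ADXL345 accelerometer and the analog gyroscope.
  return 0.0;
}

void setup() {
  leftWheel.attach(5);
  rightWheel.attach(6);
  balancePid.SetOutputLimits(-90, 90);
  balancePid.SetSampleTime(10);        // 100 Hz control loop
  balancePid.SetMode(AUTOMATIC);
}

void loop() {
  input = getFilteredAngle();
  balancePid.Compute();
  // Continuous-rotation servos: 90 = stop, 0/180 = full speed.
  // One wheel is mirrored, so its command is reversed.
  leftWheel.write(90 + output);
  rightWheel.write(90 - output);
}
```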

The hardware:

1 x Arduino Mega 2560

1 x USB host shield (for communication with a PS3 controller; not necessary. For information, see this post)

1 x Bluetooth dongle (for communication with a PS3 controller; not necessary. For information, see this post)

2 x Parallax continuous servos

1 x Dual Use Gyro and Accelerometer Sensor Board (this consists of an ADXL345 3-axis accelerometer (get it at Sparkfun) and an ADW2207 analog gyroscope, both of which can be purchased separately. Any analog gyroscope can be substituted for the ADW2207, but new code would have to be written (or found) in order to use a different accelerometer.)

1 x Bluetooth transceiver (for telemetry)

1 x 2S (2 cell) lithium polymer battery

1 x Lipo battery low voltage alarm

3 x Potentiometers (for tuning the PID controller; see the sketch after this list)

1 x Frame of your choice. Should be relatively tall.

1 x Assortment of zip-ties

1 x Assortment of wires and jumpers

1 x Wheels of your choice. Should have a diameter of approximately 7 inches.
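The three potentiometers let the PID gains be tuned live, without re-flashing. A sketch of that scheme, assuming the wipers sit on A0–A2 and using illustrative gain ranges:

```cpp
// Live PID tuning from three potentiometers (pins and ranges illustrative).
#include <PID_v1.h>

double setpoint = 0, input = 0, output = 0;
PID balancePid(&input, &output, &setpoint, 0, 0, 0, DIRECT);

void setup() {
  balancePid.SetMode(AUTOMATIC);
}

void loop() {
  // Map 0..1023 ADC readings onto usable gain ranges.
  double Kp = analogRead(A0) * (50.0 / 1023.0);  // 0..50
  double Ki = analogRead(A1) * (5.0 / 1023.0);   // 0..5
  double Kd = analogRead(A2) * (2.0 / 1023.0);   // 0..2
  balancePid.SetTunings(Kp, Ki, Kd);
  // ...rest of the balance loop as in the earlier sketch...
}
```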

Frame with servos and electronics:

Holes in frame allow zip-ties to hold servos in place. Accelerometer and gyroscope are mounted between the servos:

Arduino and shield close-up:

Battery and tuning potentiometers:

Power harness:

Completed:

The code:

A zip file containing the Arduino and Processing code can be found here.

 

Caster Bot

See this post about the USB host shield and Bluetooth dongle.

Parts List:

1 x OSEPP Mega 2560 R3 Plus (A regular Arduino Mega 2560 or UNO will work fine)

1 x USB Host Shield (Or the Sparkfun version)

1 x USB Bluetooth dongle (See this post about which dongles work and which don’t)

1 x Serial Bluetooth transceiver module (Or a more expensive version from Sparkfun)

2 x Parallax Continuous Servos

1 x Small breadboard

1 x 102dB Piezo Siren

1 x 2S (two cell) Lithium Polymer battery

1 x Lithium Polymer battery low voltage alarm

1 x MB1010 LV-MaxSonar®-EZ1™

1 x Caster wheel

2 x Drive wheels (material of your choice. Pre-made wheels can be found online).

1 x High voltage LED of your choice

1 x Frame (can be made out of plastic, wood or metal. You choose).

Code:

Here is a link to the code I am running. Note that you will need to install the USB Host Shield library, as explained in this post, in order to compile this code. I am running this code on an Arduino Mega 2560, so if you choose to use an Arduino UNO, some modifications will have to be made. First, you will have to remove all instances of Serial1.print() or Serial1.println(), as the UNO has only one serial port (doing so will not affect the operation of the code in any way). Second, because the USB Host Shield library takes up so much memory, you may have issues, especially with older versions of the UNO; you may have to remove parts of the code to keep the size down. Finally, I use one of the pins on the Mega that does not exist on the UNO, so you will have to reassign that pin in Def.h (it is pin 23, which I am using to control the MOSFET that runs the high voltage LED).
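One way to keep a single sketch compiling on both boards is to guard the Mega-only pieces behind the compiler’s board define. Def.h and the pin-23 MOSFET pin are from the code above; the macro wrapper and the UNO replacement pin are just a suggested pattern.

```cpp
// Guard Mega-only features so the same sketch builds on an UNO.
#if defined(__AVR_ATmega2560__)
  #define TELEMETRY(msg) Serial1.println(msg)  // second UART on the Mega
  const int LED_MOSFET_PIN = 23;               // pin used in Def.h
#else
  #define TELEMETRY(msg)                       // UNO: compiles to nothing
  const int LED_MOSFET_PIN = 7;                // hypothetical UNO substitute
#endif

void setup() {
  pinMode(LED_MOSFET_PIN, OUTPUT);
  TELEMETRY("boot");   // harmless no-op on the UNO
}

void loop() {}
```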