Category Archives: Software Engineering

C++ Reading Group

After a protracted journey through the second edition of Nicolai Josuttis’s The C++ Standard Library, my work group has started reading O’Reilly’s Intel Threading Building Blocks book (written by Intel’s James Reinders). I’m under no illusions that it will be perfect, and we will be reading the errata along with the book. Furthermore, a lot has changed since 2007 when the book was published and the first version of the library was released. Nevertheless, we use this high-level library in enough of our image processing code that it behooves us to know more about it. Plus, we’ve already planned to augment our reading with an examination of our own use of and extensions to TBB, as well as some of the basics of concurrency. I’m hopeful that the nine-or-so of us in the group get some mileage out of it.

At just twelve chapters and fewer than 300 pages, we should be done m-u-c-h sooner than with the Josuttis book. What to read after that? It will be a group decision, but I’m really hoping for A Tour of C++ by Bjarne Stroustrup.

In a post on the C++ ISO standard body’s blog, Stroustrup explains why he wrote this book when he has already written four editions of the more definitive The C++ Programming Language.

It gradually dawned on me that [while preparing introductory slides for a graduate course] I just might have produced a solution to a decades-old problem for C++:

What is the basic knowledge that we should be able to assume from a competent C++ programmer?

Competent C programmers can be assumed to know roughly what is covered by K&R. Conversely, if they don’t—or haven’t even heard of K&R—it is a good guess that they can’t be relied on to contribute viable C code. I find that I cannot make an equivalent statement about C++ programmers. . . . We—the huge and diverse C++ communities—do not share a body of basic understanding. This is bad; very bad! We don’t have a shared view of what good C++ code is and we don’t communicate effectively.

Having such a shared view and being able to communicate about it seems like a good thing for my team, which contains a mix of backgrounds and expertise. I know I’ll learn some good stuff from reading it.

Posted in C, Software Engineering | Leave a comment

All Together Now: Integrating the Diabetes App

Hey, y’all. Sorry I’ve been mostly absent recently. I’ve been making stuff, and it feels pretty good. I don’t get to code quite as much as I used to at work, and this (*ahem*) little project has been giving me the outlet I need.

I started building an app to integrate all of my diabetes and exercise data right after my last endocrinology appointment, when I decided I finally wanted to do something about (1) the abysmal lack of context in the data I share with my doctor, despite having so much of it automatically logged for me, and (2) the highs and lows that I see associated with exercise. I’ve had a bad track record with personal software development projects, but I was able to get some good traction with this one. After some early brainstorming and design sessions, a couple of productive plane rides, numerous conversations with a few coworkers, one Boston Quantified Self meet-up, and many evenings of hacking/coding/fixing/designing/refactoring/playing—including not a few where I stayed up way past “pumpkin time”—I feel like I have something I can actually use to start making real conclusions. Of course I’ve been using the data throughout the project, leading to a few “Ah ha!” moments, but the last bit of integration makes it so easy to really see what’s going on.

Here’s what I’ve got. (Click for a larger image.)

You’re looking at four windows in my app. There will probably be more, but this is all I can really wrangle now. Soon I will figure out how to use the awesome, undocumented stuff under development at work and put these parts into a more polished application. (This project is really helping me learn more about the bleeding-edge parts of MATLAB. While I’ve worked with it more or less every day for the last fifteen years, I never really had the chance to use many of its newer features.) Here are descriptions of the four parts.

(1) The upper left-hand window is a data browser, focusing on the highest level details for each day. This should look pretty familiar to most of us with diabetes. This part and the next one are the most quantitative, and I’m trying to work more contextual, qualitative information into the other pieces of the app. Clicking on any cell in the table changes what is shown in the other windows, and it’s possible to compare multiple days at once.

(2) The lower left window presents more details for the selected days: minimum, maximum, average, and standard deviation for both blood glucose readings and CGM data; the estimated number of carbs I ate (including the ones, like glucose tablets and food during exercise, that I didn’t bolus for) and the amount of insulin I dosed; and the amount of exercise. I have also thrown in a couple of computed values: my average blood glucose divided by the number of carbs and my effective carb:insulin ratio for the day. They might be useful as I look through the data. Or maybe they won’t. We shall see.

(3) The graphs in the upper right are the big picture view of the data. If you’re a visual person (like me) and want to see how the parts fit together, this is it. It’s all there: sensor and BG data, carbs, insulin, temporary and regular basal rates, . . . even the estimated amount of insulin concentration in my bloodstream waiting to be used. The bright blue boxes are exercise. They’re not the most beautiful graphs (yet), but I’ve already managed to see some trends in them.

(4) While looking for these trends, I found myself always going back to the raw data to see exactly what that number was or exactly when that other thing happened. So I made the fourth window—the one in the lower right—which contains the actual events logged by all of my devices. It’s the rawest form of the data that I’ve exposed in the app. (There’s even more “raw” data stored in what I’m calling “the timeline,” or database, but it’s so hard to use as-is. That’s why I wrote the app, eh?)

And that’s the big picture of what I’ve made.

Oh, I guess there’s also this other stuff, just in case you think I’ve been slacking:

  • There’s a nice box plot tool that can be configured to show averages, distributions, and other statistical goodies for different times of day. When I threw enough data at it, it became pretty clear where the most challenging parts of my day are (blood-sugar-wise, at least).
  • It integrates with Dropbox, so I can use it wherever I have both MATLAB and Dropbox installed. The data is always sync’ed up.
  • It has one of the fastest .fit file parsers around. :-)
  • There’s a good deal of date-oriented subsetting functionality. Maybe I want to look at what happened in the last 90 days since my endo appointment. Perhaps I just want to examine weekdays . . . or weekends. I might only be interested in seeing the five hours before and two hours after exercise. Easy peasy.
  • It’s possible to look for and report specific kinds of events. Only want to see CGM or pump alarms? Okay. Want to see all the details about swims lasting longer than 30 minutes? No problem.
  • I liked the free-form nature of Microsoft Outlook’s date/time entry so much that I implemented the same thing for myself. I like saying “yesterday” or “tomorrow” when talking to MATLAB’s datenum function.
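That free-form date entry boils down to mapping tokens like “yesterday” onto day offsets from a serial date number. Here’s a minimal C++ sketch of the idea (the actual app is MATLAB code built around datenum; the function name and serial-day representation here are hypothetical illustrations):

```cpp
#include <cassert>
#include <string>

// Resolve an Outlook-style free-form token to a serial day number,
// relative to "today" (a datenum-like count of days).
// Unrecognized tokens fall back to today; a fuller version would
// hand them off to a real date parser.
int resolveToken(int todaySerial, const std::string& token) {
    if (token == "yesterday") return todaySerial - 1;
    if (token == "tomorrow")  return todaySerial + 1;
    return todaySerial;
}
```

So with today as serial day 735000, “yesterday” resolves to 734999 and “tomorrow” to 735001.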

There’s more stuff, but it’s mostly building blocks.

Now the really exciting work begins: automating the search for trends . . . finding those insulin needles in the haystack, if you will. I hope to be able to answer questions like these soon: What is happening on days when I climb wicked high in the afternoon? What happens on days when I don’t? How much does my blood glucose typically drop when I ride my bike in the afternoon? What did I do on afternoons when my blood sugar was really stable while I was out running or cycling? What is my actual carb:insulin ratio on days where my blood glucose is well-managed? How does my overnight basal rate look?

I’ll let you know how the data mining turns out!

p.s. — Yes, I’m thinking about how to share this functionality with the whole world (or at the very least the MATLAB-speaking part). But before I do that, I need to ruminate much, much, much more about what it means to provide the building blocks for medical analytics to people (like me) who have to make most/all of our medical decisions without much decision-support tooling and yet are expected to get it right most of the time. It’s not something I take lightly.

Posted in Data-betes, Diabetes, Software Engineering | 7 Comments

The Frustrated Software Architect

Simon Brown gave an interesting lecture on “Agile Architecture,” entitled The Frustrated Architect at GOTOCon. Here are some notes and reactions:

  • Whatever methodology you use, aim for the best of flat, self-organizing teams. Make sure to mind the inter-team gaps. That’s where architecture helps.
  • My team could probably use UML more for modeling the architecture (the “what” and “how” of interactions between our features). Color-coding different class/component roles adds another dimension of description.
  • Do class-responsibility-collaboration (CRC) modeling for architecture. Using Post-Its makes it “Agile.”
  • “Coding the Architecture” = Win. “Project Managing the Architecture” = Fail.
  • Focus on functional/nonfunctional requirements, constraints, and operating principles.
  • Use “paper prototype”-like activities in small groups to create effective sketches of the architecture. Then ask “Would you code it that way?” If the answer is no, redo/fix it.
  • The architecture should highlight both the domain (e.g., accounting) and software design (e.g., GUIs, model-view-control, etc.).
  • Base the architecture on requirements and prove the architecture works via experiments.
  • Architecture is what is difficult to refactor in an afternoon (or week). It’s costly to change, complex, risky, and (often) novel.
  • Do “just enough” design: how the significant elements work together, how to mitigate key risks, and how to provide a foundation to move forward.
  • Just document what the code doesn’t describe.
  • We need our technical mentors to keep teaching as they keep moving up the corporate ladder.

Dear Diabetes Blog Week readers, we’ll be back to our normal programming tomorrow. Stick around. (I don’t post techie stuff like this very often.)

Posted in From the Yellow Notepad, Software Engineering | 2 Comments

Things You Should Be Reading – Pastries, Cats, and C++ Edition

It’s time to clear out some of the backlog of interesting things I’ve read or seen recently. Enjoy!

What are you reading? Leave a link or two in the comments. (Any more than two and you’ll get trapped in the spam trap. Sorry.)

Posted in Diabetes, Hoarding, Reluctant Triathlete, Software Engineering | 1 Comment

Welcome to Herb Sutter’s Jungle

In an effort to keep posting something here until I’m in the right place mentally to write about things that probably interest you, my dear friends, family, and online diabetes peeps, here’s another computing performance excerpt and link. (Working on this stuff is the 9-5 part of your favorite international playboy’s life.)

A half-decade after Herb Sutter wrote that the “free lunch” of Moore’s Law is over, he’s back with his prophet’s wisdom about where we’re going in his January Dr. Dobb’s article, “Welcome to the Jungle”. I’ll give you a moment to decide whether to get the Guns N’ Roses song out of your head or use it as a backdrop for this juicy quotation:

If hardware designers merely use Moore’s Law to deliver more big fat cores, on-device hardware parallelism will stay in double digits for the next decade, which is very roughly when Moore’s Law is due to sputter, give or take about a half decade. If hardware follows Niagara’s and MIC’s lead to go back to simpler cores, we’ll see a one-time jump and then stay in triple digits. If we all learn to leverage GPUs, we already have 1,500-way parallelism in modern graphics cards (I’ll say “cores” for convenience, though that word means something a little different on GPUs) and likely reach five digits in the decade timeframe.

But all of that is eclipsed by the scalability of the cloud, whose growth line is already steeper than Moore’s Law because we’re better at quickly deploying and using cost-effective networked machines than we’ve been at quickly jam-packing and harnessing cost-effective transistors. It’s hard to get data on the current largest cloud deployments because many projects are private, but the largest documented public cloud apps (which don’t use GPUs) are already harnessing over 30,000 cores for a single computation. I wouldn’t be surprised if some projects are exceeding 100,000 cores today. And that’s general-purpose cores; if you add GPU-capable nodes to the mix, add two more zeroes.

The big takeaway for software engineers like me is that we’d best be learning how to develop solutions using the emerging APIs so that we can harness all of those extra orders of magnitude of scalability. That involves figuring out how to . . .

  • Deal with the processor axis’ lower section [of Sutter's chart] by supporting compute cores with different performance (big/fast, slow/small).
  • Deal with the processor axis’ upper section by supporting language subsets, to allow for cores with different capabilities including that not all fully support mainstream language features.
  • Deal with the memory axis for computation, by providing distributed algorithms that can scale not just locally but also across a compute cloud.
  • Deal with the memory axis for data, by providing distributed data containers, which can be spread across many nodes.
  • Enable a unified programming model that can handle the entire [memory/locality/processor] chart with the same source code.

Perhaps our most difficult mental adjustment, however, will be to learn to think of the cloud as part of the mainstream machine — to view all these local and non-local cores as being equally part of the target machine that executes our application, where the network is just another bus that connects us to more cores. That is, in a few years we will write code for mainstream machines assuming that they have million-way parallelism, of which only thousand-way parallelism is guaranteed to always be available (when out of WiFi range). . . .

If you haven’t done so already, now is the time to take a hard look at the design of your applications, determine what existing features — or better still, what potential and currently unimaginable demanding new features — are CPU-sensitive now or are likely to become so soon, and identify how those places could benefit from local and distributed parallelism. Now is also the time for you and your team to grok the requirements, pitfalls, styles, and idioms of hetero-parallel (e.g., GPGPU) and cloud programming (e.g., Amazon Web Services, Microsoft Azure, Google App Engine).

p.s. — I can’t believe that it’s been almost four years since I took a course with Herb out in Washington. That was some hard-core learnin’.

Posted in Computing, Fodder for Techno-weenies, Software Engineering | Leave a comment

We Need a New Mindset

Guy Steele drops a truth bomb.

(From How to Think about Parallel Programming: Not!)

Posted in Computing, Fodder for Techno-weenies, Software Engineering | Leave a comment

Thinking Differently about Software Optimization

Yesterday morning while eating my “Free Wednesday Breakfast” chocolate croissant and fresh fruit with yoghurt, I watched an interview with John Nolan entitled “The State of Hardware Acceleration with GPUs/FPGAs, Parallel Algorithm Design.” In the spirit of giving back, I’m posting a few notes.

  • When optimizing code for GPU, FPGA, or CPU, definitely focus on pipelining and overall throughput, not just local optimizations.
  • There’s a trade-off between “faster” and “sooner.” It’s not always worth saving a few seconds (or even a few minutes) if the kernels take hours or days to compile. (Then again, sometimes it is.)
  • Try to reduce dependence on the language/compiler “stack” that removes inefficiencies. The optimizer does good work, but you can do things to help it. Think about the hardware or architecture format. It’s not a sin to reduce the amount of abstraction in the service of performance. Pay attention to things that affect processor pipelining and cache movement.
  • BTW, some languages and technologies exist to provide higher level programming that’s close to the hardware, but they’re proprietary, secret, or still in R&D.
  • Use algorithmic optimization techniques. Step back and find the shortest-time computation.
  • Avoid using if statements. The goto construct is considered harmful, but if is basically the same thing. Instead think about state machines and polymorphism. There’s no branch-prediction penalty to pay, since the system “just is” in the state it’s supposed to be in. The logic is clearer, because there are no switches, making it easier to test, too.
  • Don’t always assume that floating-point values are necessary. Integers can often be creatively used and are far faster for math than double-precision numbers.
  • Of course, there’s a compromise between speedy/efficient and readable/maintainable.
  • Aim to structure programs as “symbolic intent.” Mathematical descriptions are bad ways of expressing programs. Think about functional programming models instead of procedural.
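As a toy illustration of the “state machines and polymorphism instead of if” point above (my example, not Nolan’s): each state object encapsulates its own behavior and its successor, so client code never branches on a mode flag.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Each state knows its behavior and its transition, so callers invoke
// color()/next() without any if/switch on a mode variable.
struct SignalState {
    virtual ~SignalState() = default;
    virtual std::string color() const = 0;
    virtual std::unique_ptr<SignalState> next() const = 0;
};

struct RedState : SignalState {
    std::string color() const override { return "red"; }
    std::unique_ptr<SignalState> next() const override;
};

struct GreenState : SignalState {
    std::string color() const override { return "green"; }
    std::unique_ptr<SignalState> next() const override {
        return std::make_unique<RedState>();
    }
};

std::unique_ptr<SignalState> RedState::next() const {
    return std::make_unique<GreenState>();
}
```

The “system just is in its state” claim shows up here as virtual dispatch: the transition logic lives in each state class, not in a central conditional.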

If you want to know more, you should definitely watch the half-hour interview. And if your reaction was more along the lines of “Yes, yes; that’s all true, and it’s how I design my image processing code,” then I definitely hope you’ll consider applying for the GPU/multicore engineering position we have open.

Posted in Computing, Fodder for Techno-weenies, From the Yellow Notepad, Software Engineering | Leave a comment

Now Hiring Image Processing Software Engineers

My group at work—the Image Processing and Geospatial Computing Group at MathWorks—is hiring a couple of software engineers. One of them could be you.

We need someone with GPU and multicore programming skills. We’re looking for experience with CUDA, OpenCL, OpenMP, Intel’s Threading Building Blocks, or similar technologies. If you’re into making algorithms run wicked fast, you should definitely apply.

The other position focuses on image processing and code generation. If you like implementing image processing algorithms and converting MATLAB code to C code, then this is the job for you.

I’ve been at The MathWorks for almost fourteen years now, and it’s a really great company with an excellent corporate culture, competitive compensation, fantastic benefits, and lots of perks. Because everyone uses MATLAB and because we’ve made some very sensible business decisions over the last 28 years, it’s a very stable company to work for. (Did I mention that we’re putting up our fourth building in our Natick campus? And I think I also mentioned that the entire worldwide staff went on a cruise a few years ago.)

If image processing isn’t your thing, we have dozens of other positions open. Everything from web development to legal department work. Human resources to customer service. Technical writing to application engineering and consulting. Marketing to program management. QE, sales, usability, and more software development positions than you can shake a stick at.

Come, help us accelerate the pace of engineering and science worldwide. And if you do apply, tell them I sent you.

Posted in General, Software Engineering | Leave a comment

QCon SF 2011 Software Engineering Conference Notes

It’s sometimes possible to forget when reading all of the posts here about travel, diabetes, triathlon, and photography that they’re just a small part of my life. I have a job to which I devote a whole lot more time. I don’t talk about it much because (a) discussing what I’m working on putting into the Image Processing Toolbox isn’t appropriate or allowed, and even if it were (b) talking shop probably isn’t that interesting to most of the people here. But—believe it or not—the majority of traffic to my site lands on the pages that are technical, so I don’t feel so bad about posting the random “fodder for techno-weenies” post. (It’s a term of endearment, I promise! :^)

This is another one of those posts. Every year between Christmas and New Year’s Day, I try to use the quiet week to get stuff done and tie up loose ends. Last year, I cleared out a bunch of notes. This year, I’m looking at presentations and slides from the QCon SF 2011 conference (wrap-up). Its focus on software architecture and project management is about 75% of my job, so many of the presentations seemed tailor-made for me. Here’s some of what I learned.

Erik Doernenburg. “Software Quality: You Know It When You See It” has a really good slide deck that got me thinking about some projects I might want to set up. It’s full of practical, usable suggestions:

  • View the code at the 1,000-foot view, rather than at ground level or from 30,000 feet.
  • Look at the test-to-code ratio, not just code coverage.
  • Graph the change of metrics between versions and revisions, compare across different parts of the code, and look at them relative to industry standards.
  • Measure the “toxicity” of code by rolling up various quality metrics about a bunch of modules into stacked bar charts.

We should pose these questions during design and code reviews:

  • Is the software/change of value to its users?
  • How appropriate is the design?
  • How easy is the code/design to understand and extend?
  • How maintainable is the software?

It was full of some really great links to things like Metrics tree maps (a.k.a., pretty heatmaps for source code) as well as a few tools: SourceMonitor, iPlasma, and using Moose to visualize quality.

Joshua Kerievsky. “Refactoring to Patterns” — some notes:

  • Refactoring is like algebra’s equivalence-preserving manipulations. “Design patterns are the word problems of the programming world; refactoring is its algebra.”
  • Understanding the refactoring thought process is more important than remembering individual techniques or tool support.
  • Code smells have multiple refactoring options and often benefit from composite refactorings.
  • Look for automatable refactorings first. Consider changing the client of smelly code before the smelly code itself.

Guilherme Silveira. “How To Stop Writing Next Year’s Unsustainable Piece Of Code” was pithy and thought-provoking.

  • There is no value for architecture or design without implementation. That’s just interpretation of the software.
  • “New language. New mindset. New idiomatic usage. Same mistakes.”
  • Complexity and composition are natural and good, but if they’re invisible, they’re evil.
  • Start with a mess and refactor right away. Starting “right” is hard (and misguided thinking). Refactor for better, not just prettier.
  • Make complexity easier to understand and see.
  • Hiding complexity in concision hurts testability, since no one knows the complexity is there. Furthermore, if it’s hard to test, it’s also hard to use correctly.
  • “Model rules. Do not model models.”

Michael Feathers. “Software Naturalism: Embracing The Real Behind The Ideal” is a presentation that I would like to see/hear, since the slides seemed full of information but weren’t self-explanatory. Here are two things I could glean: 80% of software defects in large projects were in 20% of the files. In general, the more churn in a file, the more complex it tends to be.

Panel: “Objects on Trial” was perhaps the most unusual presentation, since it was a mock-trial. I use objects all the time . . . some of them are good . . . some demonstrably so. Even so, I never latched onto the idea of object-oriented (OO) design versus objects as types. The four panelists, in one way or another, basically said, “That’s the problem.”

One of the panelists drew an extended analogy between the space program and OO. The space shuttle (which we all love) was fixated on reuse but basically was a waste of heavy lifting; people don’t reuse the right stuff. In software, object reuse is largely accomplished by cut-and-paste copying of boilerplate code that does close to what you want. Of course, the panelist acknowledged that we do reuse the ideas in OO via design patterns, and no one seems to have much of a problem with that. Ironically, having a rich pattern language means that software engineers are in a better place than ever before to use objects correctly.

A key problem with our approach to objects is that we’ve failed (generally in software engineering) to handle complexity well, which was supposed to be the point of OO design. A conflation of beauty and OO design makes things worse. Internally, software is ugly, and beauty shouldn’t be a goal. Making a fetish of beauty makes code inflexible because people don’t want to extend the beautiful thing that works.

For other panelists, objects weren’t the problem at all. For them it’s static typing in “OO languages,” such as C++, Java, and C#. We’re at a place now where all of the good things about OO have been lost in an attempt to make OO languages as fast as C. This runs counter to the goal of having “ordinary,” understandable code. Generic programming using strongly typed (possibly template-heavy) languages just makes everything complicated.

For me, it’s moot. C++ is what I use, and I don’t have a large proprietary object system that I can tap into for reuse. I’m in the camp that uses C++ objects to generate new types for data hiding and aggregation, as well as (to a lesser extent) reuse. But some of these types are generic, template classes that are hard to understand. I plead “no contest.”

Posted in Computing, Fodder for Techno-weenies, From the Yellow Notepad, Software Engineering | Leave a comment

Things You Should Be Reading – August Edition

Hey everybody, I’m about a week late with the August edition of “Things You Should Be Reading.” There’s a little bit of something for everybody here.

Posted in Diabetes, General, Health Care, Software Engineering, Worthy Feeds | Leave a comment

Things You Should Be Reading

Hey, everybody. It’s that time again. The time to clean out a bunch of links that I’ve read and share them with you because I think you might find them interesting.

Posted in Computing, Cycling, Data-betes, Diabetes, Software Engineering, Worthy Feeds | 1 Comment

App Update

Today a bunch of my online peeps were in California visiting Medtronic. I wish I’d been invited to go too, but that was not the case. Had I been there, I would have squealed like a little schoolgirl at the pre-announcement that they’re rolling out support for uploading and using CareLink on a Mac next week.

Not only is that great for me when working with my own data, it will make developing my app easier. People may still need to take the extra step of downloading a CSV file containing their data, but at least they’ll be able to do it on their platform. Not perfect, but better.

In an ideal world — the one that I would have advocated for at pump/CGM HQ — third-party app developers (like me) would be able to ask the online CareLink database for a person’s diabetes data via an application programming interface (API). Mobile app developers could then hold on to that data for offline or mobile use without ever needing to talk directly to the medical devices themselves. Frankly, writing code to connect directly to a life-preserving medical device is quite risky and something I would like to avoid; it’s also the kind of thing that requires rigorous, time-consuming, expensive FDA approval. Not very appealing when all I want to be is a data consumer.

I’m hoping that Medtronic provides a mechanism to open up this data soon, because I’m getting close to being able to benefit from it. And when I say “this data,” I mean “our data” because it really is ours. We’re the ones who generated the data through our self-management decisions, and we’re the ones who will benefit the most from using that data to make decisions. All I’m really asking for is a way to log in to CareLink without using a web browser and to retrieve data securely.

I’ve been working on my pump+CGM data visualizer a lot recently — most evenings in fact. On my Mac, I can extract events from a comma-separated value (CSV) file generated on the CareLink website, and I can pick out “interesting” events that are relevant for self-management. Now I’m working on being able to store those interesting events in a form that I can send to my iPod. (Then there are the tasks related to visualizing the data, but I’m starting small.)

It’s taking me longer than I expected to build this application. Objective-C isn’t hard, but learning the ins and outs of any new framework or library is always a bit involved. (Turns out I’ve been using a lot more C++ than I had expected . . . not that there’s anything wrong with that.) And I realized that I actually need to build two applications: one part that sits on a “traditional computer” that can talk to CareLink and the other that visualizes the data on an iPhone, iPod, or iPad.

Here’s a little example of the raw data that I will eventually use to generate graphs and an annotatable logbook:

3/30/11|16:20:00|GlucoseSensorData|AMOUNT=106, ISIG=10.2
3/30/11|16:25:00|GlucoseSensorData|AMOUNT=98, ISIG=9.71
3/30/11|16:30:00|GlucoseSensorData|AMOUNT=98, ISIG=10.59
3/30/11|16:35:00|GlucoseSensorData|AMOUNT=100, ISIG=10.66
3/30/11|16:40:00|GlucoseSensorData|AMOUNT=102, ISIG=10.94
3/30/11|16:45:00|GlucoseSensorData|AMOUNT=102, ISIG=10.6
3/30/11|16:50:00|GlucoseSensorData|AMOUNT=102, ISIG=10.56
. . .
3/30/11|18:14:01|BolusWizardBolusEstimate|BG_INPUT=195, BG_UNITS=mg dl,
3/30/11|18:14:01|BolusNormal|AMOUNT=1.7, CONCENTRATION=null,
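Each record above is pipe-delimited: date, time, event type, then a comma-separated list of fields. Splitting one record is straightforward; here’s a minimal C++ sketch (my illustration, with hypothetical names; the app’s actual parser breaks the trailing fields out further):

```cpp
#include <cassert>
#include <sstream>
#include <string>

struct PumpEvent {
    std::string date;   // e.g., "3/30/11"
    std::string time;   // e.g., "16:20:00"
    std::string type;   // e.g., "GlucoseSensorData"
    std::string fields; // remaining "KEY=value, ..." payload
};

// Split one pipe-delimited CareLink-style record into its four parts.
// The trailing field list is kept whole here; a real parser would
// pick out AMOUNT, ISIG, BG_INPUT, and so on.
PumpEvent parseRecord(const std::string& line) {
    std::istringstream in(line);
    PumpEvent e;
    std::getline(in, e.date, '|');
    std::getline(in, e.time, '|');
    std::getline(in, e.type, '|');
    std::getline(in, e.fields);
    return e;
}
```

Feeding in the first sensor line above yields type "GlucoseSensorData" with fields "AMOUNT=106, ISIG=10.2".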
Posted in CGM, Data-betes, Diabetes, Fodder for Techno-weenies, Software Engineering | Leave a comment

Seriously Now. Let’s Start Coding.

Okay, I’ve picked up a smattering of Objective-C, learned about a few of the frameworks, and sketched some of the interface. I’ve contemplated the data model, and I’ve worked out a few of the interactions.

Isn’t it about time to put aside my “iPhone developer impostor” feelings and just start coding? Yes it is. I’m not going to build this app unless I start writing it.

ABC: Always Be Coding.

Posted in Data-betes, Life Lessons, Software Engineering | Leave a comment

Ready to Start Coding

It’s been a week since I announced that I was going to write an iPhone app. I’m still excited about it, even though someone told me that Medtronic is working on their own version of the same thing I proposed — a working prototype, she said. Well, I for one am glad that they see some value in having a mobile app, but I plan to keep working on mine. Competition is good. We’ll see who can get their app out there first: the newbie picking up iPhone development skills or the large medical company that is going through FDA approval.

Since last Saturday I’ve learned a lot. I’ve picked up the syntax of Objective-C, which is causing this C++ programmer to “think differently.” I like what I’ve seen from it so far, but we’ll see what I think after building something real. I’ve made a couple “Hello, World!” applications, just enough to get a few basic skills using Xcode and Interface Builder.

Now the hard work begins.

I’ve made a list of requirements for the first couple of (internal) versions of the app, so I know what needs I plan to satisfy. I’ve picked an external library to plot the CGM data. And I’ve started working on the functional design, sketching a few different views that people will use to interact with their data. (I usually hate graphical user interface design, but something about the UIKit components seems to be amplifying my scanty abilities with interaction design.) I still have to figure out the data model — that is to say, the architecture — but I think that should follow from the views I create, which of course are supposed to visualize items in the pump/CGM data model.

Tomorrow I’ll try to put a few components together. Stay tuned!

Posted in CGM, Computing, Data-betes, Diabetes, Fodder for Techno-weenies, Software Engineering | 1 Comment

Total Diabetes Awareness, The App

I am terrible with personal software projects. At work, I have no problem getting things started and finished. But elsewhere, I’m just a bit too distracted by everything else in my life to engage in some casual programming.

But this morning, as I was putting on my socks — and checking my diabetic feet, of course — I struck upon a project that I would gladly spend at least a few evenings and weekends to get working.

Throughout the day I’ve been talking myself into writing an iPhone app to display all of the data from my pump and continuous glucose monitor. I’ve already learned a great deal from my CGM, but what I need now is a memory device that collects all (or at least most) of my diabetes data in one place, so that I can use what’s worked in the past (as well as what didn’t work so well) in order to make better decisions.

The idea behind the as-yet-unwritten software is to transfer all of the data and events from my part of Medtronic’s CareLink website — it stores my CGM sensor values, blood glucose readings, insulin boluses, temporary basals, infusion set changes, etc. — and store them on my iPod, where I can review and plot them graphically. (And if it works on my iPod Touch, it would run on an iPhone or iPad, too.)

Why do I want to do this? There are many situations that happen regularly but not quite frequently enough for me to remember what happened. What happened the last time I had Thai food? How much insulin did I give? What did I estimate the carbs to be? And what about the last time I went for a long bike ride? What temporary basal did I use? I could write all this down, but (as I’ve recently discovered) there’s a great deal of value in seeing the CGM trace surrounding the event.

So my app will be fairly small to begin with:

  • Import data from a CSV file that I can download from the CareLink site.
  • Plot the CGM graphs and display the insulin, BG, and pump events.
  • Display details about these events. What did I enter into the pump’s bolus wizard? What did I end up doing?
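The import step is the natural place to start. The app itself will be written in Objective-C, but here’s a minimal sketch of the idea in Python, assuming a CareLink-style CSV export — the column names below are illustrative placeholders, not the real CareLink schema:

```python
import csv
from io import StringIO

# Hypothetical CareLink-style export; the column names are stand-ins,
# not the actual CareLink field names.
SAMPLE = """Timestamp,Sensor Glucose (mg/dL),BG Reading (mg/dL),Bolus (U)
2011-02-05 07:00,112,,
2011-02-05 07:05,118,121,
2011-02-05 07:10,125,,4.5
"""

def load_events(text):
    """Turn CSV rows into event dicts, keeping only the fields that are filled in."""
    events = []
    for row in csv.DictReader(StringIO(text)):
        event = {"time": row["Timestamp"]}
        for field in ("Sensor Glucose (mg/dL)", "BG Reading (mg/dL)", "Bolus (U)"):
            if row[field]:  # skip empty cells
                event[field] = float(row[field])
        events.append(event)
    return events

events = load_events(SAMPLE)
print(len(events))             # 3 rows imported
print(events[2]["Bolus (U)"])  # 4.5
```

Once the rows are normalized into event records like these, plotting the CGM trace and listing the bolus/BG events are both just different views over the same list.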

That’s a first version. That will let me carry around a self-updating logbook. Journaling is something we hate to do, so why not just aggregate the data that I already produce minute by minute?

One or two improvements would make this app truly useful. The pump and CGM have a lot of data, but they lack context. I need to be able to add a tag or two and some notes to the handful of events that I want to highlight. That’s the first step; the next is to be able to search for those events and then look at what happened.
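That tag-and-search layer doesn’t need to be fancy. Here’s a rough sketch in Python of the shape I have in mind — every name here is hypothetical, just to show how annotations could sit alongside the imported pump/CGM events, keyed by timestamp:

```python
from collections import defaultdict

class Annotations:
    """Hypothetical tagging layer: attach tags and notes to events
    (identified by timestamp) and search for them later."""

    def __init__(self):
        self._tags = defaultdict(set)  # timestamp -> set of tags
        self._notes = {}               # timestamp -> note text

    def tag(self, timestamp, *tags):
        self._tags[timestamp].update(tags)

    def note(self, timestamp, text):
        self._notes[timestamp] = text

    def find(self, tag):
        """Return timestamps of all events carrying the given tag, oldest first."""
        return sorted(t for t, tags in self._tags.items() if tag in tags)

ann = Annotations()
ann.tag("2011-02-05 18:30", "thai-food", "dinner")
ann.note("2011-02-05 18:30", "Estimated 80 g carbs; dual-wave bolus.")
ann.tag("2011-02-12 18:45", "thai-food")
print(ann.find("thai-food"))  # both Thai-food dinners, oldest first
```

Searching for a tag would then pull up the matching timestamps, and the app could jump straight to the CGM trace surrounding each one.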

Once I’ve got that app working, who knows what could happen?

Of course, why do I want to make this app, when I’m usually so reluctant to write software away from the office? Plainly put, I need this application to improve my self-management. Observations of daily living are among the most powerful components of managing a chronic illness, but they are a complete pain in the ass to record manually.

Clearly this is something that Medtronic should be doing. It would greatly simplify things if I could sync directly from my pump to my iPod, and I’ve already tried without success to get them to tell me the data protocol of my pump. It’s my data after all, but they won’t talk. (Hell, it would be much better if I could download directly from CareLink to my iPod rather than doing a crazy two-step of saving a .csv file and uploading it to my iPod.) Perhaps Medtronic already is working on such an application, but I can’t count on it. Seeing the results of my actions is so useful that I will take one for the team and start writing this application.

My ultimate goal is to share the app and the code with the world. I would like very much to make this an open source project . . . the first salvo in a “test strip rebellion” where we people with diabetes take back the data stored on our medical devices. If the medical device manufacturers won’t make these apps, we must. If the FDA is going to make it difficult for commercial ventures to produce innovative solutions, we patients will have to turn the tables; it’s true that we’ll have to accept the risk for what we do, but at least we will have what we need to improve our health.

Oh! And I will need your help. I’ve never written an iOS application. If you have some skills, I might want to ask you a couple of questions. First up: I need recommendations for newbies making their first forays into iPhone app development.

Posted in CGM, Data-betes, Diabetes, Software Engineering | 2 Comments