GSoC 2017 – Operation Theater Module Workflow Enhancements

This post originally appeared in OpenMRS Talk as my final presentation.

Operation Theater Module Workflow Enhancements

Mentors: Akshika Wijesundara & Harsha Kumara

Code contribution summary: GitHub

There’s a certain pleasure in seeing people make use of your work. That’s one of the main driving forces behind the open source philosophy. It’s deeply satisfying to know that your lines of code make someone’s life easier, even if by just a little bit.
But what about not just making lives easier, but helping to save lives? That’s some next-level stuff. Enter OpenMRS.
This summer I had the opportunity to contribute code toward saving lives, and this is my final note about the experience.


My project was to work on the Operation Theater module, which can be used to manage theater activities in hospitals. It had been built as a previous GSoC project, but hadn’t been kept up to date with the rest of OpenMRS.
So I had two primary targets:

  1. Migrate the module to work with the latest OpenMRS platform.
  2. Perform workflow enhancements to make it more useful.

And migrate and enhance we did.


Migrating the Operation Theater module

The OT module was written for platform 1.9.7, in the age of Java 7. Fast forward to 2017, and we have platform 2.7 in beta.
So, as my first task, I set out to migrate the module to the latest stable OpenMRS platform – 2.6. This involved a number of things, mostly related to replacing outdated components in the module’s code base.

Replacing Joda Time library

The module solves a constraint satisfaction problem – scheduling surgeries into the available operation theaters – using the Optaplanner library. This involves modeling the available times of the operation theaters and the durations of surgeries, avoiding overlapping surgeries, and making maximum use of the available time.

The original implementation modeled the problem with the time-handling concepts and implementations available in 2014. It used the Joda-Time library for things like intervals, durations and timeline management. That library has since become largely obsolete: its own authors recommend migrating to the java.time package introduced in Java 8.

What’s more, the use of Joda-Time raised many dependency issues, because different versions of the same library were being used across different OpenMRS modules. We had a nightmare with class loader constraint violations.

After discussions with my mentors, I replaced Joda-Time with the Java standard library’s java.time package and the ThreeTen-Extra library. Together, these two provide the essential features – such as intervals and durations – required by the CSP solution.
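To give a sense of what that replacement looks like in code, here’s a minimal sketch using java.time together with ThreeTen-Extra’s Interval type (the times and durations are made up for illustration):

```java
import java.time.Duration;
import java.time.Instant;
import org.threeten.extra.Interval;

public class TheaterSlotSketch {
    public static void main(String[] args) {
        // Two candidate surgery slots in the same theater (illustrative values).
        Instant start = Instant.parse("2017-08-21T08:00:00Z");
        Interval first = Interval.of(start, start.plus(Duration.ofHours(2)));
        Interval second = Interval.of(start.plus(Duration.ofHours(1)),
                start.plus(Duration.ofHours(3)));

        // ThreeTen-Extra fills the gap Joda-Time left behind: an Interval type
        // with overlap checks, which java.time alone doesn't provide.
        System.out.println(first.overlaps(second)); // true – a scheduling conflict
        System.out.println(first.toDuration());     // PT2H
    }
}
```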
Elsewhere, OpenMRS had moved on from Hibernate 3.x while the OT module still used the older version, so we upgraded the module to the newer Hibernate as well.
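The pattern OpenMRS introduced for this migration, as far as I know, is to have module DAOs depend on the platform’s DbSessionFactory wrapper rather than on Hibernate’s SessionFactory directly, shielding modules from Hibernate API differences. A minimal sketch – the DAO name and entity are made up, not the module’s actual code:

```java
import org.openmrs.api.db.hibernate.DbSession;
import org.openmrs.api.db.hibernate.DbSessionFactory;

// Hypothetical DAO: DbSessionFactory hides the Hibernate 3 -> 4 API differences.
public class HibernateSurgeryDAO {

    private DbSessionFactory sessionFactory;

    // Injected by Spring via the module's application context.
    public void setSessionFactory(DbSessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void saveSurgery(Object surgery) {
        DbSession session = sessionFactory.getCurrentSession();
        session.saveOrUpdate(surgery);
    }
}
```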

And with that, the primary target was done. Here are some views from the module.

Next, I moved on to the workflow enhancements.


Workflow Enhancements

The workflow enhancements mostly focused on data collection throughout a surgery’s life cycle. During the initial planning period, we had identified a set of data that would be useful for theater activities, covering pre-theater, in-theater and post-theater activities.

As for pre-theater data, our initial plan was to collect the following.

  1. Past surgeries
  2. Pre-theater prescriptions
  3. Allergies
  4. Fitness for surgery / physical condition

My first thought was to collect these data as free text – let someone type them as notes for the surgical team. But it was later pointed out that we should ideally collect the data with concepts and observation groups. This made more sense, as it organized the data collection and provided a way to query the information more readily via OpenMRS. In the long run, it would also allow users to generate reports on theater activities.

I was warned that implementing data collection with obs groups might be tougher and take longer. After discussing with my primary mentor, we decided to implement as much of the data collection as we could using concepts and obs groups.
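For context, an obs group in OpenMRS is just a parent Obs whose group members carry the actual values. A minimal sketch of the idea – the class and parameter names are illustrative, not the module’s actual code:

```java
import java.util.Date;
import org.openmrs.Concept;
import org.openmrs.Obs;
import org.openmrs.Patient;
import org.openmrs.api.context.Context;

// Hypothetical sketch: build a "Procedure History" obs group for a patient.
public class ObsGroupSketch {

    public Obs buildProcedureHistory(Patient patient, Concept groupConcept,
                                     Concept procedureQuestion, Concept performedProcedure) {
        // The parent obs carries no value itself; it only groups its members.
        Obs group = new Obs(patient, groupConcept, new Date(), null);

        Obs procedure = new Obs(patient, procedureQuestion, new Date(), null);
        procedure.setValueCoded(performedProcedure);

        group.addGroupMember(procedure); // members are saved along with the parent
        return Context.getObsService().saveObs(group, null);
    }
}
```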

In choosing the concepts for the obs groups, @ball provided guidance and advised using concepts that are already available, as this makes the module adhere to established standards and allows the collected data to be analyzed beyond just OpenMRS. She pointed me to CIEL, where past procedure data could be collected with the following concepts:

  • Procedure History (grouping concept)
    • Procedure performed
    • Procedure date/time
    • Procedure comment

The next problem was ensuring that these concepts would be available in whatever environment the module was deployed to. Again, @ball provided examples of how this had been done in other implementations, and I added the concepts to the module activator.

The module activator checks whether a deployment environment already has the procedure history concepts from CIEL, and adds them if not. Along with that, I added a new concept group for gathering surgery data across its workflow, as below.

  • Procedure information (grandparent grouping concept)
    • Pre-theater drugs (parent grouping concept)
    • In-theater drugs
    • Post-theater drugs
    • Procedure History (same as before)

All three drug concepts are parent groups that facilitate adding drug prescriptions as observations of the surgery. So we’ve achieved data collection with concepts and obs groups, thereby standardizing it.
The point is that this allows the data to be identified across systems, rather than being confined to OpenMRS. It’s better to use an international convention so that more people can make sense of the data without the hassle of yet another concept dictionary.
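Here’s roughly what that activator check looks like – a simplified, hypothetical sketch; the UUID is a placeholder rather than the real CIEL identifier, and real code sets more metadata:

```java
import java.util.Locale;
import org.openmrs.Concept;
import org.openmrs.ConceptName;
import org.openmrs.api.ConceptService;
import org.openmrs.api.context.Context;
import org.openmrs.module.BaseModuleActivator;

public class OperationTheaterActivator extends BaseModuleActivator {

    // Placeholder – the module uses the actual CIEL UUIDs here.
    private static final String PROCEDURE_HISTORY_UUID = "placeholder-uuid";

    @Override
    public void started() {
        ConceptService service = Context.getConceptService();
        // Only create the grouping concept if this deployment lacks it.
        if (service.getConceptByUuid(PROCEDURE_HISTORY_UUID) == null) {
            Concept procedureHistory = new Concept();
            procedureHistory.setUuid(PROCEDURE_HISTORY_UUID);
            procedureHistory.addName(new ConceptName("Procedure History", Locale.ENGLISH));
            procedureHistory.setSet(true); // marks it as a grouping concept
            procedureHistory.setConceptClass(service.getConceptClassByName("ConvSet"));
            procedureHistory.setDatatype(service.getConceptDatatypeByName("N/A"));
            service.saveConcept(procedureHistory);
        }
    }
}
```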

Here are some views that portray the data collection within the module. Oh, and I’m not the most knowledgeable person on drug-related terms – please excuse me if I’ve mixed up pills and tablets, or prescribed a drug where you never would. 😄

Pre-theater data:

Post-theater data:

As for in-theater data, I felt it was best to add it to the post-operative form.

GSoC 2017 – Week 4

It’s the fourth week, and we’re nearly done with the first round of development. My main objective for this round was migrating the Operation Theater module to the latest platform, and I’ve completed about 80% of it. There’s just one more thing to finish – getting the scheduler working.

The OT module uses a tool named Optaplanner for scheduling theater activities. It’s an open source constraint satisfaction solver written in Java. We can model theater planning as a constraint satisfaction problem (CSP) and use Optaplanner to get a reasonably good solution within a given amount of time. As you might know, CSPs are considered to be NP-complete or harder.

Let’s start with a few definitions before we get to the nitty-gritty.

Constraint satisfaction problem – a problem with a limited set of resources and constraints on the possible paths to a solution. A solution needs to satisfy the given constraints using the available resources.
NP-completeness – NP stands for Nondeterministic Polynomial time. We’d have to go way off topic to describe exactly what each of those terms means, so let’s settle for a high-level idea. Simply put, NP-complete means the following:

  • It’s easy to verify a given solution to such a problem in a reasonable amount of time.
  • We don’t know an efficient method to find such a solution.

The best-known characteristic of NP-complete problems is that we don’t have a fast way to solve them: the running time of every known exact method grows exponentially with the size of the problem. 😐

You already know that theater scheduling is a CSP. What else is? Well, think about creating timetables in a university. Or scheduling nurses in a hospital. Planning car assembly lines. Optimizing investment portfolios. Cutting steel sheets with minimal waste. All of these are constraint satisfaction problems. Aaand planning problems. They have goals they wish to optimize and limited resources under constraints. You can refer to the Optaplanner documentation for a good description of them[1].
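For a taste of how such a problem is modeled, here’s a hypothetical sketch of an Optaplanner planning entity – the class name, field types and value-range names are illustrative, not the module’s actual domain model:

```java
import java.time.Instant;
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// The solver repeatedly reassigns the planning variables below,
// scoring each candidate assignment against the constraints.
@PlanningEntity
public class ScheduledSurgery {

    // Fixed problem facts (duration, patient, surgeon, ...) would live here.

    // Which theater hosts the surgery – chosen from the "theaterRange" values.
    @PlanningVariable(valueRangeProviderRefs = "theaterRange")
    private String theater; // a proper domain class in real code

    // When the surgery starts – chosen from the "startTimeRange" values.
    @PlanningVariable(valueRangeProviderRefs = "startTimeRange")
    private Instant start;

    // Getters, setters and a no-arg constructor omitted for brevity.
}
```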

Anyway, the OT module has an implementation based on Optaplanner 6.0.0. That one doesn’t play nicely with the shiny new libraries we’re using now, so I need to migrate it to version 7.0.0.

There are two ways I can do this.

  1. Apply the changes made in Optaplanner between versions 6.0.0 and 7.0.0 to the existing solution.
  2. Write a new solution against the latest version, using the earlier implementation as a guide.

I thought about this a little and realized that while I could write a new solution, I’d then need to work out how the current implementation communicates with the Optaplanner engine and write matching API endpoints to keep everything functioning. Given the limited time I have alongside the rest of the internship work, it’s best to first try migrating the existing solution.

So right now, I’m working on making the necessary changes to get the solution running ASAP. Let’s see how optimal we can get.

Biometric Identification with Keystroke Dynamics Using Java

So this started as an assignment but turned out to be quite fun by the end. We were asked to build a biometric identification system in Java using any biometric identifier. There was one restriction – no libraries allowed. Out went OpenCV, and pretty much anything that could be made to work reliably within the time frame of a week.

My first thought was to make a voice identifier – a system that saves the distinct features of someone’s voice in order to identify them uniquely. But that turned out to be far more time-consuming than I expected, and it involved libraries I wasn’t allowed to use. There wasn’t enough time to write the algorithms myself. So after three days of struggling, I switched to keystroke dynamics.

What’s keystroke dynamics? It’s the official term for the unique way you type on a keyboard. It turns out each of us uses a keyboard differently – and the difference is more than just overall speed. We spend different amounts of time pressing each key and moving on to the next, we have our own favourites among the Shift keys, and we have unique patterns when moving from one key to another. I’m sure you’ve noticed these things. And these little things add up to a relatively strong set of features that can identify someone uniquely.

To make things more organized, let’s start with biometrics in general. A biometric identifier is a distinct, measurable characteristic that can be used to label and describe individuals[1]. You’ve probably used the most popular one – that fingerprint scanner on your iPhone or Android. There are two main categories of biometric identifiers.

  1. Physiological – relating to your body
    • Fingerprints, iris recognition, face recognition, DNA, hand geometry, body smell and more
  2. Behavioural – involving the way you behave
    • Voice, typing rhythm, gait (basically the way you move)

Intro’s done. In our case, keystroke dynamics has the features I mentioned before, with some fancy technical terms for most of them. Here are some definitions to make our lives easier.

  • Dwell time – the time a key is kept pressed down.
  • Flight time – the period between the press (or release) times of two consecutive keys; sometimes measured as the time between releasing one key and pressing the next.
  • Digraphs & trigraphs – pairs & trios of letters that frequently appear together. In English, these include th, sh, tr, wh for digraphs and nth, sch, scr, spl, spr, str, thr for trigraphs[2]. More generally, these are known as n-graphs.

Take a moment to think about these, especially the flight times and n-graphs. You’ll realize how each of us has different latencies associated with each n-graph. I myself type (a,s) and (i,o) within 50 milliseconds.
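If you want to play with these measurements yourself, dwell times can be captured with nothing but the standard library. A sketch of the idea – not the assignment code itself – that you can attach to any Swing text component with addKeyListener:

```java
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import java.util.HashMap;
import java.util.Map;

// Records how long each key stays pressed (its dwell time).
public class DwellTimeListener extends KeyAdapter {

    private final Map<Integer, Long> pressedAt = new HashMap<>();

    @Override
    public void keyPressed(KeyEvent e) {
        // Remember when the key went down; ignore auto-repeat re-presses.
        pressedAt.putIfAbsent(e.getKeyCode(), e.getWhen());
    }

    @Override
    public void keyReleased(KeyEvent e) {
        Long down = pressedAt.remove(e.getKeyCode());
        if (down != null) {
            long dwellMs = e.getWhen() - down; // dwell = release - press
            System.out.printf("%s dwelled %d ms%n",
                    KeyEvent.getKeyText(e.getKeyCode()), dwellMs);
        }
    }
}
```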

The verification methodology of this prototype is quite crude at the moment, as described below.

I first let the user type an arbitrary text and measured the dwell time for each key on a conventional US keyboard – the time each letter of the English alphabet is kept pressed down. For example, user X dwells on the key “Q” for 120 ms, taken as the average over all occurrences of “Q” in the text. Then, when a user – say Y – tries to identify himself as X, Y’s dwell time on “Q” is measured; let’s say it turns out to be 90 ms. The system checks whether the difference for “Q” is within a tolerance of 20 ms – did Y type within the safe range of 100 to 140 ms? If not, the system counts that Y failed to pass “Q”. If 10 or more such failures occur, Y is identified as different from X.
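In code, the check boils down to something like this – a hypothetical reconstruction of the logic above, not the actual assignment code:

```java
import java.util.HashMap;
import java.util.Map;

// Compares a claimant's average dwell times against an enrolled profile.
public class DwellTimeVerifier {

    private static final double TOLERANCE_MS = 20; // allowed deviation per key
    private static final int MAX_FAILURES = 10;    // failures before rejection

    // Average dwell time per key, built from the enrolled user's sample text.
    private final Map<Character, Double> profile = new HashMap<>();

    public void enroll(char key, double averageDwellMs) {
        profile.put(key, averageDwellMs);
    }

    // True if the claimant fails fewer than MAX_FAILURES keys.
    public boolean verify(Map<Character, Double> claimantAverages) {
        int failures = 0;
        for (Map.Entry<Character, Double> entry : claimantAverages.entrySet()) {
            Double expected = profile.get(entry.getKey());
            if (expected == null) {
                continue; // key never seen during enrollment
            }
            if (Math.abs(entry.getValue() - expected) > TOLERANCE_MS) {
                failures++; // e.g. 90 ms against the enrolled 120 ms for 'Q'
            }
        }
        return failures < MAX_FAILURES;
    }
}
```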

Obviously, this is a very rough and arbitrary measurement. Test it with your friends and you’ll realize that dwell times are more similar across people than you’d expect. But it was reasonably accurate, given that only a single parameter was being used. To be fair, this is more of a proof of concept (and a last-minute assignment submission). The system is to be extended with flight times and n-graphs in the future. Maybe.

You can find the code on GitHub. Give it a test run, why don’tcha?

Or better yet, extend the feature set.

See you soon!

References:
[1] https://en.wikipedia.org/wiki/Biometrics#Soft_biometrics
[2] http://www.enchantedlearning.com/consonantblends/
[3] http://www.cs.columbia.edu/4180/hw/keystroke.pdf
[4] https://www.hindawi.com/journals/tswj/2013/408280/