Archive for the 'Standardisering' Category

Every year for the past couple of years, I’ve been giving a lecture to the “International Masters Program in Health Informatics” at the Karolinska Institute in Stockholm, and tomorrow I’ll give another one. The notes for that lecture are fairly complete and can be read on their own as a text that concisely outlines my major ideas about, and problems with, electronic medical records. It suddenly struck me that I should publish those notes here, and not just hand them out to the students as I usually do. So here they are.

Handout 20131216

This will be my third and last post about the SFMI symposium on November 8, I hope. Although, since I still occasionally wake up at night remembering yet another detail, shake my head, and have trouble getting back to sleep, there’s no guarantee it will turn out that way.

In the middle of all the agonizing over NI and SNOMED CT not fitting together and how to move forward, I naively thought I would offer a useful abstraction and maybe even a little guidance. So I seized the floor again (it tends to go that way with me: after having been given the floor a couple of times it becomes ever harder to get, and I have to become more assertive to keep interrupting, so this time I simply seized it).

In my world (no, I’m not entirely alone in it) there is a good trick to reach for when you don’t know which way to go in a design: you take a step back. Just as most implementation problems can be solved with an extra “level of indirection”, most design problems can be solved with “one step back”. And the step back in this case is to ask yourself what the goal of the design is. Where do you want to end up?

Do you want to improve the physician’s knowledge of the patient’s history? Then some design choices are more obvious than others. You should choose terms and visualizations that faithfully reflect established clinical reality, and only secondarily aim for information-technology consistency.

Do you want to improve the physician’s access to knowledge bases? Then you should choose a design that connects more easily to the codings in established and future knowledge bases.

Do you want to improve the county councils’ measurements of care? Then you should choose a design that yields the maximum output of statistical values and clearly distinguishes between different conditions, improvements in those conditions, and choices of diagnostic methods and therapies.

Do you want to improve the quality registries? Then you should choose structures that fit those registries and minimize duplicated work.

Do you want to be politically correct and make sure the patient can access and understand the record? Well, then I don’t know what to tell you. Then it’s probably best to keep running in circles, because that idea is fatally misconceived.

The choice of design steps thus depends entirely on the priorities you set for the solution as a whole. In other words, on what the system is to be used for in the first place, in the second place, and so on. You can’t have everything at once, or everything at the same top priority. So my advice was: let’s spend one more minute defining what the ultimate goal of the project is, and then we can answer how it should be designed in detail.

But no such luck. “The project is an assignment from Socialstyrelsen (the National Board of Health and Welfare). And besides, we absolutely don’t have time for those kinds of philosophical discussions. We have to get this project done first.” There’s probably only one word for this:

Doomed

Earlier this week we attended a symposium organized by SFMI about standards in medical informatics. I’ve always considered myself an opponent of standards in healthcare IT, at least since my stint as an observer in the CEN group for healthcare IT standards. But thinking about it now, I’m not really an opponent of standards as such, only of the way standards are developed, and thereby of just about every standard developed that way. They are developed exactly the way software was developed in the 60s and 70s, with the waterfall method, and the result is the same kind of bloated and unusable junk.

The symposium consisted of two parts: a long enumeration of all the standards that have been invented and by whom, together with the observation that some of them really ought to be useful in some way, even if it isn’t quite clear exactly how or why. The second part was one long lament over the fact that the standards don’t fit together in the slightest.

This is exactly how most large IT projects used to fail. All the modules were developed in isolation, and when you thought you were done, you tried to integrate them and discovered that everyone had been working from completely different ideas about the whole. By then it was too late and you could throw the entire project away. Just like healthcare IT standards: each one on its own is big and expensive and seems to have a purpose, but taken together they are nothing but a gigantic pile of manure.

So how did we solve this in software development? With agile development and “nightly builds”. The idea is that you develop an absolutely minimal module of each kind, then build the entire system every day (or night). From day one you notice if the modules don’t fit together or encroach on each other’s territory.
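
To make the idea concrete, here is a minimal sketch of what such a nightly build could look like, with entirely hypothetical module names and build commands; the point is only that the whole system gets assembled and exercised together every single day.

```python
# A minimal sketch of the "nightly build" idea. Module names and make targets
# are hypothetical: build every module, then run one cross-module smoke test,
# so integration mismatches surface the same day they are introduced.

import subprocess

MODULES = ["records-core", "lab-adapter", "prescription-module"]  # hypothetical

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # abort the nightly build on the first failure

for module in MODULES:
    run(["make", "-C", module, "build"])      # assumes each module has a build target

# One end-to-end test is enough to reveal that two modules disagree about the whole.
run(["make", "integration-smoke-test"])
print("nightly build OK")
```

The first mismatch between modules fails the build the same night it is introduced, instead of ten years later.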

If we want healthcare IT standards that can actually be used for something, we probably need the same method: develop a minimal standard of each kind and use all the standards together in a test project from day one. That way, unnecessary or misconceived standards can be thrown out at an early stage, instead of ten years and a hundred million kronor later.

I’m just copying a post here I just did to a closed forum for CISSPs.

A couple of days ago, I had to create a death certificate in Cosmic, the EHR system produced by Cambio Healthcare Systems and used in many provinces of Sweden and increasingly abroad.

So, I opened up the records for the patient, created a new death certificate form and filled it in. Printed it out, since it needs to go the paper route to the IRS (in Sweden, they handle the population registry). Then, just to make sure my data matched the EHR entry I made a few days before, I opened up the form again and discovered that four different entry fields had changed after I saved. Two address fields were blanked, my “place of employment” was changed to “Summer house” (part of another field I had filled in), and finally, the telephone number I had added was blanked out. I corrected the fields and resaved; the same thing happened again. Did it three times, same thing. I never signed the document, of course, instead having a secretary scan in my paper form, which was correct, and have that put in the EHR. The erroneous form remains there, but unsigned.

I pointed out this severe bug to the IT department, and the reply I just got went into some depth explaining what the different fields were supposed to contain, but it didn’t touch at all on the hair-raising fact that the documents were changed behind my back. That’s apparently entirely ok with them.

In this scenario I never signed, but if I had, nothing would have played out differently. The scary thing is that the normal workflow is to fill in a form, any form, print it out (optionally), then sign it, which flags it as signed and saves it in one operation. You never see what actually gets saved with your “signature” on it. We’ve had a number of bugs before, where dates were changed in sick leave forms, a number of crucial fields erased, and so on, so this is just the latest in a long series of such bugs.

This system, the largest on the Scandinavian market, uses Acrobat Reader (yes, you read that right, *Reader*) to fill in forms. So they prepare the form data in the background, launch the Reader, lock it down modally since they can’t handle the interactions right, then let you edit and save. The “save” and “signature”, even “delete” buttons are implemented *inside* the document form since they run modally. Just to give you an idea of the “leading edge technology” we’re talking about here.

The forms as such are designed by the end-user organisation, so the problem is in two parts: Cambio enables a sloppy workflow and does not respect the immutability of signed data in their application. The end-user organisation does not test new forms for problems.

So, my issues with all this are:

1. This product has passed CE approval. So where is the systems test? These problems are trivial to find before rollout. Not to mention that I, and others, have pointed these form problems out in public for at least two years. What’s the point of the CE, anyway?

2. If Cosmic is able to change the content of forms behind my back, why isn’t this recorded in a log? There is no way I can show after the fact that the form contains stuff I never wrote, even assuming I could remember what I wrote, and this has caused a lot of consternation before with the sick leave forms. Why isn’t audit trailing of this a requirement from the user organisation or from the CE protocol?

3. Why does the system not warn me or show me the changed information during or after signature? It bloody well warns me for everything else I don’t need warnings for. A typical Windows app, if you get my drift.

4. Why doesn’t the “signature” mean anything? It’s simply a flag set in the system with no functional binding to the information. They’re in the process of rolling out smart cards now; I have one. You stick it into a slot on the keyboard to sign in, at least that’s the idea (doesn’t work, they don’t have the trusted root installed…). But that’s for Windows login. The “signature” in the EHR remains a dumb flag AFAIK.
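
Just to make point 4 concrete: here is a minimal sketch of what a signature with a functional binding to the information could look like. This is my own illustration, not anything Cosmic does, and the key handling is waved away entirely.

```python
# A sketch of a content-bound signature: change one character of the document
# and the signature no longer verifies. The key name is hypothetical.

import hashlib
import hmac

PER_USER_SIGNING_KEY = b"key held by the signing service for this clinician"

def sign(document_text: str) -> str:
    return hmac.new(PER_USER_SIGNING_KEY, document_text.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def verify(document_text: str, signature: str) -> bool:
    return hmac.compare_digest(sign(document_text), signature)

note = "Death certificate; place of employment: the clinic"
tag = sign(note)
print(verify(note, tag))                         # True: content matches what was signed
print(verify(note + " (silently edited)", tag))  # False: a dumb flag would never notice
```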

Meanwhile, the law and regulations governing medical practice make a huge deal out of these signatures. We *have* to sign stuff in a timely fashion and can be sanctioned if we don’t. And if we do sign, we’re held to what we sign, legally, morally, ethically. Our careers can be held hostage by a stupid flag in a stupid database record, designed by an irresponsible designer, and implemented by an agile and equally uninformed coder.

My question is this: is this shitty state of affairs, this total ignorance of what the law and regulations say, this total lack of interest in quality and consistency in application design and implementation, something common to EHR systems everywhere? Is this laissez-faire attitude something you actively try to combat as security professionals if you work in the medical field, and if not, why not?

Or, provocatively, I’ve repeatedly heard on this list (it’s a while since last time) that doctors don’t respect security in EHR systems, but now my question is this: does anyone else? It seems not.

And finally, WTF is the point of the CE approval…? I’ve seen all the cynical answers, now I want a real answer somehow.

Now we’ve arrived at the last of the solutions in my list, namely “Opening the market for smaller entrepreneurs”. There are a number of reasons we have to do this, and I’ve touched on most of them before in other contexts.

The advantages of having a large all-in-one vendor to deliver a single system doing everything for your electronic health-care record needs are:

  • You don’t have to worry about interconnections, there aren’t any
  • You don’t have to figure out who to call when things go wrong, there’s only one vendor to call
  • You can reduce your support staff, at least you may think so initially
  • You can avoid all arguments about requirements from the users, there is nothing you can change anyway
  • It looks like good leadership to the uninitiated, just like Napoleon probably looked pretty good at Waterloo, at least to start with

The disadvantages are:

  • Since you have no escape once the decision is made, the costs are usually much higher than planned or promised
  • There is only one support organization, and they are usually pretty unpleasant to deal with, and almost always powerless to do anything
  • Any extra functionality you need must come from the same vendor, and will cost a fortune, and will always be late, bug-ridden, and wrong
  • The system will be worst-of-breed in every individual area of functionality; its only characteristic being that it is all-encompassing (like mustard gas over Ieper)
  • The system will never be based on a simple architecture or interface standards; there is no need for it, the vendor usually doesn’t have the expertise for it, and the designers have no incentives to do a quality job
  • Since quality is best measured as the simplicity and orthogonality of interfaces and public specs, and large vendors don’t deliver either of these, there is no objective measure of quality, hence there is no quality (there’s a law in there somewhere about “that which is not measurable does not exist”; was it Newton who said that?)
  • Due to poor architecture, the system will almost certainly be developed as too few and too large blocks of functionality, making them harder than necessary to maintain (yes, the vendor maintains it for you, but you pay and suffer the poor quality)

Everybody knows the proverb about power: it corrupts. Don’t give that kind of power to a single vendor, he is going to misuse it to his own advantage. It’s not a question of how nice and well-meaning the CEO is, it is his duty to screw you to the hilt. That’s what he’s being paid to do and if he doesn’t, he’ll lose his job.

But if we want the customers to choose best-of-breed solutions from smaller vendors, we have to be able to offer them these best-of-breed solutions in a way that makes it technically, economically, and politically feasible to purchase and install such solutions. Today, that is far from the case. Smaller vendors behave just like the big vendors, but with considerably less success, using most of their energy bickering about details and suing each other and the major vendors, when things don’t go as they please (which they never do). If all that energy went into making better products instead, we’d have amazingly great software by now.

The major problem is that even the smallest vendor would rather go bust trying to build a complete all-in-one system for electronic health-care records, than concede even a part of the whole to a competitor, however much better that competitor is when it comes to that part. And while the small vendors fight their little wars, the big ones run off with the prize. This has got to stop.

One way would be for the government to step in and mandate interfaces, modularity, and interconnection standards. And they do, except this approach doesn’t work. When government does this, it selects projects on the advice of people whose livelihood depends on the creation of long-lived committees where they can sit forever producing documents of all kinds. So all you get is high cost, eternal committees, and no joy. Since no small vendor could ever afford to keep an influential presence on these committees, the work will never result in anything useful to the smaller vendors, while the large vendors don’t need standards or rules of any kind anyway, since they only connect to themselves and love to blame the lack of useful standards for not being able to accommodate any other vendor’s systems. This way, standards consultants standardize, large vendors ignore the standards and keep selling, and everyone is happy except for the small vendors and, of course, the users, who keep paying through the nose for very little in return.

There’s no way out of this for the small vendors and the users if you need standards to interoperate, but lucky for us, standards are largely useless and unnecessary even in the best of cases. All it takes is for one or two small vendors to publish de facto standards, simple and practical enough for most other vendors to pick up and use. I’ve personally seen this happen in Belgium in the ’80s and ’90s, where a multitude of smaller EHR systems used each other’s lab and referral document standards instead of waiting for the official CEN standards, which didn’t work at all once published (see my previous blog post). In the US, standards are generally not invented by standards bodies but selected from de facto standards already in use, and then approved, which explains why US standards usually do work while European standards don’t.

Where does all this leave us? I see only one way out of this mess, and that is for smaller vendors to start sharing de facto standards with each other. Which leads directly to my conclusion: everything I do with iotaMed will be open for use by others. I will define how issue templates will look and how issue worksheets and observations will be structured, but those definitions are free to use by any vendor, small or large. At the start, I reserve the right to control which document structures and interfaces can be called “iota” and “iotaMed”, but as soon as other players take an active and constructive part in all this, I fully intend to share that control. An important reason not to let go of it from the start is that I am truly afraid of a large “committee” springing up whose only interest will be to make it cost more, increase the page count, and take forever to produce results. And that, I will fight tooth and nail.

On the other hand, I’ll develop the iotaMed interface for the iPad and I intend to publish the source for that, but keep the right to sell licenses for commercial use, while non-profit use will be free. Exactly where to draw that line needs to be defined later, but it would be a really good thing if several vendors agreed on a common set of principles, since that would make it easier for our customers to handle. A mixed license model with GPL and a regular commercial license seems to be the way to go. But in the beginning, we have to share as much as possible, so we can create a market where we all can add products and knowledge. Without bootstrapping that market, there will be no products or services to sell later.

Around 1996 I was part of the CEN TC251 crowd for a while, not as a member but as an observer. CEN is the European standards organization, and TC251 is “Technical Committee 251”, the committee that does all the medical IT standardization. The reason I was involved is that I was then working as a consultant for the University of Ghent in Belgium, and my task was to create a Belgian “profile” of the “Summary of Episode of Care” standard for the Belgian market. So I participated in a number of meetings of the TC251 working groups.

For those in the know, I must stress that this was the “original” standards effort, all based on Edifact-like structures and before the arrival of XML on the stage. I’ve heard from people that the standards that were remade in XML form are considerably more useful than the stuff we had to work with.

I remember this period in my life as one of meeting a lot of interesting people and having a lot of fun, but at the same time being excruciatingly frustrated by overly complex and utterly useless standards. The standards I had to work with simply didn’t compute. For months I went totally bananas trying to make sense of what was so extensively documented, but never succeeded. After a serious talk with one of the chairpersons, a very honest Brit, I finally realized that nobody had ever tried this stuff out in reality, and that most, maybe even all, of my complaints about inconsistencies and impossibilities were indeed real and recognized, but that it was politically impossible to admit to that publicly. Oh boy…

I finally got my “profile” done by simply chucking out the whole lot and starting over again, writing the entire thing as I would have done if I’d never even heard of the standards. That version was immediately accepted and I was recently told it still is used with hardly any changes as the Belgian Prorec standard, or at least a part of it.

The major lesson I learned from the entire CEN debacle (it was a debacle for me) is that the first rule in standardization of anything is to avoid it. Don’t ever start a project that requires a pre-existing standard to survive. It won’t survive. The second rule is: if it requires a standard, it should be small and functional, not semantic. The third is: if it is a semantic standard, it should comprise a maximum of a few tens of terms. Anything beyond a hundred is useless.

It’s easy to see that these rules hold in reality. HTTP and HTML are hugely successful standards because they’re small; HTTP gets by with just a few verbs, such as GET, PUT, etc. XML: the same thing holds. Snomed CT: a few hundred thousand terms… you don’t want to hear what I think of that, you’d have to wash your ears with soap afterwards.

From all my years of developing software, I’ve never ever encountered a problem that needed a standard like Snomed CT, that couldn’t just as well be solved without it. During all those years, I’ve never ever seen a project requiring such a massive standards effort as Snomed CT, actually succeed. Never. I can’t say it couldn’t happen, I’m only saying I’ve never seen it happen.

The right way to design software, in my world, is to construct everything according to your own minimal coding needs, but always keep in mind that all your software activities could be imported and exported using a standard differing from what you do internally. That is, you should make your data simple enough and flexible enough to allow the addition of a standard later. If it is ever needed. In short: given the choice between simple or standard, always choose simple.

Exactly how to do this is complex, but not complex in the way standards are, only complex in the way you need to think about it. In other words, it requires that rarest of substances, brain sweat. Let me take a few examples.

If you need to get data from external systems, and you do that in your application in the form of synchronous calls only, waiting for a reply before proceeding, you severely limit the ability of others to change the way you interact with these systems. If you instead create as many of your interactions as possible as asynchronous calls, you open up a world of easy interfacing for others.
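
A minimal sketch of the asynchronous style, with hypothetical names throughout: the application depends only on an awaitable callable, so the transport behind it (direct call, message queue, another vendor’s service) can be swapped without touching the caller.

```python
# Sketch: the caller never knows or cares how the lab results actually arrive.

import asyncio
from typing import Awaitable, Callable

FetchLabResults = Callable[[str], Awaitable[list]]

async def simulated_remote_fetch(patient_id: str) -> list:
    await asyncio.sleep(0.1)                    # stands in for network latency
    return [f"Hb 142 g/L for {patient_id}"]

async def show_overview(patient_id: str, fetch: FetchLabResults) -> None:
    pending = asyncio.create_task(fetch(patient_id))   # fire the request...
    print("rendering the rest of the overview while results are pending")
    print(await pending)                               # ...and pick up the reply later

asyncio.run(show_overview("pat-123", simulated_remote_fetch))
```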

If you use data from other systems, try to use them as opaque blocks. That is, if you need to get patient data, don’t assume you can actually read that data, but let external systems interpret them for you as much as possible. That allows other players to provide patient data you never expected, but as long as they also provide the engine to use that data, it doesn’t matter.
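
Here is a small sketch of the opaque-block idea, again with hypothetical names: the payload is never parsed locally, and the providing system registers the code that renders it.

```python
# Sketch: external data is stored untouched; interpretation stays with the provider.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class OpaqueBlock:
    source_system: str   # who provided the data
    content_type: str    # e.g. "application/x-lab-report+xml"
    payload: bytes       # never parsed by us

renderers: Dict[str, Callable[[bytes], str]] = {}  # supplied by the external systems

def register_renderer(content_type: str, render: Callable[[bytes], str]) -> None:
    renderers[content_type] = render

def display(block: OpaqueBlock) -> str:
    render = renderers.get(block.content_type)
    if render is None:
        return f"[{block.content_type} from {block.source_system}: no viewer installed]"
    return render(block.payload)

register_renderer("text/plain", lambda payload: payload.decode("utf-8"))
print(display(OpaqueBlock("lab-system-x", "text/plain", b"Hb 142 g/L")))
```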

Every non-trivial and distinct functionality in your application should be a separate module, or even better, a separate application. That way it can easily be replaced or changed when needed. As I mentioned before, the interfaces, and the module itself, will almost automatically be of better quality as well.

The most useful rule of thumb I can give you is this: if anyone proposes a project that includes the need for a standard containing more than 50 terms or so, say no. Or if you’re the kind of person who is actually making a living producing nothing but “deliverables” (as they call these stacks of unreadable documents), definitely say yes, but realize that your existence is a heavy load on humanity, and we’d all probably be better off without your efforts.

The quality of our IT systems for health-care is pretty darn poor, and I think most people agree on that. There have been calls for oversight and certification of applications to lessen the risk of failures and errors. In Europe there is a drive to have health-care IT solutions go through a CE process, which more or less amounts to a lot of documentation requirements and change control. In other words, the CE process certifies a part of the process used to produce the applications. But I dare claim this isn’t very useful.

If you want to get vendors to produce better code with fewer bugs, there is only one thing you can do to achieve that: inspect the code, directly or indirectly. Everything else is too complicated and ineffective. The only thing the CE process will achieve is more bureaucracy, more paper, slower and more laborious updates, fewer timely fixes, and higher costs. What it also will achieve, and this may be very intentional, is that only large vendors with massive overhead staff can satisfy the CE requirements, killing all smaller vendors in the process.

But back to the problem we wanted to solve, namely code quality. What should be done, at least theoretically, is actual approval of the code in the applications. The problem here is that very few people are actually qualified to judge code quality correctly, and it’s very hard to define on paper what good code should look like. So as things stand today, we are not in a position where we can mandate a certain level of code quality directly, leaving us no other choice than doing it indirectly.

I think most experienced software developers agree that the public specifications and public APIs of a product very accurately reflect the inner quality of the code. There is no reason in theory why this needs to be the case, but in practice it always is. I’ve never seen an exception to this rule. Even stronger, I can assert that a product that has no public specifications or no public API is also guaranteed to be of poor quality. Again, I’ve never seen an exception to this rule.

So instead of checking paper-based processes as CE does, let’s approve the specifications and APIs. Let the vendors subject these to approval by a public board of experts. If the specs make sense and the APIs are clean and orthogonal and seem to serve the purpose the specs describe, then it’s an easy thing to test whether the product adheres to the specs and the APIs. If it does, it’s approved, and we don’t need to see the original source code at all.
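
As a rough sketch of what such adherence testing could amount to (all names and fields are hypothetical): the published spec declares what a resource must expose, and a conformance test checks a live response against that declaration.

```python
# Sketch of spec-adherence testing against a vendor's published API description.

REQUIRED_FIELDS = {           # taken from the vendor's published spec (hypothetical)
    "patient_id": str,
    "issue": str,
    "observations": list,
}

def conforms(resource: dict) -> bool:
    return all(
        name in resource and isinstance(resource[name], expected_type)
        for name, expected_type in REQUIRED_FIELDS.items()
    )

sample_response = {
    "patient_id": "pat-123",
    "issue": "Diabetes mellitus type 2",
    "observations": ["HbA1c 52 mmol/mol"],
}
print(conforms(sample_response))   # True only if the product matches its own public spec
```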

There is no guarantee that you’ll catch all bad code this way, but it’s much more likely than if you use CE to do it. It also has the nice side effect of forcing all players to actually open up specs and APIs, else there’s no approval.

One thing I can tell you off the bat: the Swedish NPÖ (National Patient Summary) system would flunk such an inspection so hard. That API is the horror of horrors. Or to put it another way: if any approval process would be able to let the NPÖ pass, it’s a worthless approval process. Hmmm…. maybe we can use NPÖ as an approval process approval test? No approval process should be accepted for general use unless it totally flunked NPÖ. Sounds fine to me.

Don’t forget to register on Vård IT Forum; that’s where it all happens.

As the interest in iotaMed and the problems it is intended to solve clearly increases, we need to get our ducks in a row and make it simple to follow and to argue. Let’s do it the classic way:

  1. What is the problem?
  2. What is the solution?
  3. How do we get there?

Let’s do these three points, one by one.

What is the problem?

The problem we try to solve is actually a multitude of problems. I don’t think the below list is complete, but it’s a start.

  1. Lack of overview of the patient
  2. No connection to clinical guidelines
  3. No connection between diseases and prescriptions, except very circumstantial
  4. No ability to detect contraindications
  5. No archiving or demoting of minor or solved problems, things never go away
  6. Lack of current status display of the patient, there is only a series of historical observations
  7. In most systems, no searchability of any kind
  8. An extreme excess of textual data that cannot possibly be read by every doctor at every encounter
  9. Rigid, proprietary, and technically inferior interfaces, making extensions with custom functionality very difficult

What is the solution?

The solution consists of several parts:

  1. The introduction of a structural high-level element called “issues”
  2. The connection of “issues” to clinical guidelines and worksheets
  3. The support of a modular structure across vendors
  4. The improvement of quality in specifications and interfaces
  5. The lessening of dependence on overly large standards
  6. Lessening of the rigidity of current data storage designs
  7. The opening of the market to smaller, best-of-breed entrepreneurs

How do we get there?

Getting there is a multiphase project. Things have to be done in a certain order:

  1. Raising awareness of the problems and locating interested parties (that is what this blog is all about right now)
  2. Creating a functioning market
  3. Developing the first minimal product conforming to this market and specs
  4. Evolving the first product, creating interconnections with existing systems
  5. Demonstrating the advantages of alternate data storage designs
  6. Inviting and supporting other entrepreneurs to participate
  7. Inviting dialogue with established all-in-one vendors and buyer organizations
  8. Formalizing cooperation, establishing lean working groups and protocols

Conclusion

None of this is simple, but all of it is absolutely necessary. Current electronic health care systems are leading us on a path to disaster, which is increasingly clear to physicians and nurses working with these systems. They are, in short, accidents waiting to happen, due to the problems summed up in the first section above. We have no choice but to force a change to the design process, deployment process, and not least the purchasing process that has led us down this destructive path.

I’ll spend another few posts detailing the items in these lists. I may change the exact composition of the lists as I go along, but you’ll always find the current list on the iotaMed wiki.

If you want to work on the list yourself, register on the iotaMed wiki and just do it. That’s what wikis are for. Or discuss it on the Vård IT Forum.

…um, at least as far as medical records go. SQL remains useful for a lot of other things, of course. But as far as electronic medical records are concerned, SQL is a really bad fit and should be taken out back and shot.

Medical records, in real life, consist of a pretty unpredictable stack of document types, so some form of graph database is very obviously the best fit for storage. Anything with rows and columns and predeclared types is a very poor fit, except maybe for patient demographics and lab data. Or maybe not even that.

The problem so far has been the lack of viable implementations, so instead of doing the right thing and creating the right database mechanism, most of us (me included) forced our data into some relational database, often sprinkling loose documents around the server for all those things that wouldn’t fit even if hit with a sledgehammer. All this caused mayhem with the data, concurrency, integrity, and not least, security.

I have to add here that I never personally committed the crime of writing files outside of the SQL database; I squeezed them into the database however much effort it cost. But judging from the horrors I’ve encountered in the field, it seems not many were as driven as I was. I have, though, used Mickeysoft’s “structured storage” files for that, a bizarre experience at best.

You have to admit this is ridiculous. It leads to crazy bad databases, bad performance, horrible upgrading scenarios, and, adding insult to injury, high costs. Object-relational frameworks don’t help much and without going into specifics, I can claim they’re all junk, one way or the other, simply because the idea itself sucks.

From now on, though, there’s no excuse for cramming and mutilating medical records data into SQL databases anymore. Check out RDF first, to get a feel for what it can do. It’s part of the “semantic web” thing, so there’s a buzzword for you already.

A very good place to start is the rdf:about site, and right in the first section, there’s a paragraph that a lot of people involved in the development of medical records really should pause and contemplate, so let me quote:

What is meant by “semantic” in the Semantic Web is not that computers are going to understand the meaning of anything, but that the logical pieces of meaning can be mechanically manipulated by a machine to useful ends.

Once you really grok this, you realize that any attempt to make the computer understand the actual contents of the semantic web is meaningless. Not only is it far too difficult to ever achieve, but there is actually nothing to be gained. There’s nothing useful a computer can do with the information except present it to a human reader in the right form at the right time. What is important, however, is to let the computer understand just enough of the document types and links to be able to sort and arrange documents and data in such a way that the human user can benefit from accurate, complete, and manageable information displays.

It is almost trivial to see that this applies just as well to medical records. In other words, standardizing the terms of the actual contents of medical documents is a fool’s errand. It’s a pure waste of energy and time.

If a minimal effort were expended on standardizing link types and terms instead, we could fairly easily create semantic medical records, allowing human operators to utilize the available information effectively. All it would take for the medical community to realize this is to raise their gaze, check out what the computer science community is doing with the web, and copy that. At least, that’s what we are aiming to do with the iotaMed project, and I hope we won’t remain alone. What is being done with RDF on the web makes a trainload of sense, and we’re going to exploit that.

In practice, this means that you need to express medical data as RDF triples and graphs. This turns out to be nearly trivial, just as it is very easy to do for the semantic web. It’s a lot harder, and largely useless, for typical accounting data, flight booking systems, and others of that kind, but those systems should really keep using SQL as their main storage technique.
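
A minimal sketch of what that could look like with rdflib and an entirely made-up “iota” vocabulary; the point is only how naturally a patient, an issue, a guideline, and an observation fall out as a small graph.

```python
# Sketch: a medical-record "issue" expressed as RDF triples with rdflib.
# The namespace and property names are hypothetical.

from rdflib import Graph, Literal, Namespace, URIRef

IOTA = Namespace("http://example.org/iota/")

g = Graph()
patient = URIRef("http://example.org/patient/pat-123")
issue = URIRef("http://example.org/issue/diabetes-t2")

g.add((patient, IOTA.hasIssue, issue))
g.add((issue, IOTA.label, Literal("Diabetes mellitus type 2")))
g.add((issue, IOTA.followsGuideline, URIRef("http://example.org/guideline/diabetes")))
g.add((issue, IOTA.hasObservation, Literal("HbA1c 52 mmol/mol")))

print(g.serialize(format="turtle"))   # the whole record fragment as a readable graph
```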

We also need a graph database implementation, and we’re currently looking into Neo4j, an excellent little mixed-license product that seems to fill most, if not all, requirements. But if it turns out it doesn’t, there are others out there, too. After all the years I’ve spent swearing at SQL Server and inventing workarounds for the bad fit, Neo4j and RDF are a breath of fresh air. The future is upon us, and it’s time to leave the computing middle ages behind us as far as electronic medical records are concerned.

Every project and initiative in healthcare IT can be classified into one of two types: interconnection and the rest.

Interconnection projects, as the term indicates, all have in common that they involve improving just the exchange of data and nothing else, either by actually interconnecting two or more systems or by creating some standard intended to make interconnection easier. Almost without exception, these projects are described as “improving healthcare”, and almost without exception, no effort is expended on actually describing what that improvement is or means. Practically none of these projects are preceded by a reasonable cost/benefit analysis, or any other analysis of any kind.

Then you have all other projects, those that do not primarily speak of interconnection or standard structures or terms intended to make interconnections easier. Those projects are usually, but not always, meant to solve a defined and real problem, often preceded by cost/benefit analyses, and a decent technical analysis and design.

I’d say more than 90% of all projects we see in healthcare, at least those with public funding, are of the first, “interconnect”, variety. They are, as I said, usually without any decent motivation or groundwork beyond the desire to “interconnect” for its own sake, and they usually fail by any objective criteria. But note that they are usually declared successful anyway; since the requirements were never clear, it’s just as unclear how to judge success or failure, so they might as well be called a success, whatever happens.

The remaining 10% or less may have interconnection as a component, but are founded on some other functional principle, are usually scientifically and technically sound, have a real hard time getting funding, but are often successful and useful. They don’t get much publicity, though.

So, my advice to you is: if you want to contribute something real to healthcare, avoid any project exclusively having to do with “interconnections”. If you’re more interested in committee work without much risk of ever having to prove that you actually accomplished anything useful, jump on any chance you get to do “interconnection” work.

And if you’re in a budget committee and take that responsibility seriously, jump on any “interconnection” proposals and demand a detailed clarification of what exactly will be the benefit of the project, and don’t settle for “more information must be good”. That’s malarkey. If you find any other motivation that actually makes sense, please let me know.

Don’t forget Vård IT Forum!

I’ve settled on a name for my architecture and design: “iotaMed”, which stands for “Issue Oriented Tiered Architecture for Medicine”. It is meant to be an open project, not the property of me or anyone else, and that goes for the documentation, the design, and any code.

So I’ve set up a wiki for documentation and planning, and it is accessible anonymously. If you want to contribute, and hopefully you do, you need to register. You’ll find the wiki here:

iota.pro

I wrote about this today on ursecta.com.

On my ursecta blog I describe today how to tie confidentiality to “issues” instead of to physicians, departments, or notes. That solves a whole range of problems and can even solve the problem of diseases that are important in other contexts remaining unknown in current systems.

Just today we got our 100th member on Vård IT Forum! Big congratulations to myself! Thank you, thank you! The member list is starting to look like a little Who’s Who of people and companies in Swedish healthcare IT. We have more than 350 posts in 80 different topics at the moment.

In the previous post I described what we have to go through when we want to find information in a medical record, especially when we don’t know whether the information exists at all. My modest proposal was to implement ctrl-F, but I harbour no illusions that it will actually be done. If it doesn’t involve a project group and something European, it is completely meaningless to our “purchasers”.

While I’m dreaming of the day when somebody actually cares about how the systems are used in practice, I might as well offer my next idea, one that would most certainly help enormously: “tag clouds”.

A tag cloud is a list of the words that occur in a text. The “text” can be a blog, a website, a collection of websites, the entire readable Internet, or a patient record. First you filter out trivial words like “and”, “or”, “I”, “patient”, and so on, and then you order what remains alphabetically or by how many times each word occurs in the text. If you order alphabetically, the frequency is often rendered as font size or colour.
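
A minimal sketch of the mechanism (the stop-word list and the sample notes are made up):

```python
# Sketch: strip trivial words, count what remains, and let frequency drive
# font size or colour in the display.

import re
from collections import Counter

STOP_WORDS = {"and", "or", "the", "a", "of", "is", "patient"}  # trivial words to drop

def tag_cloud(notes, top=20):
    words = re.findall(r"[a-zåäö]+", " ".join(notes).lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(top)

notes = ["Liver values rising, suspected cholestasis.",
         "Icterus noted, liver ultrasound ordered."]
print(tag_cloud(notes))   # e.g. [('liver', 2), ('icterus', 1), ...]
```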

If we had such a feature in the EHR system, we could have seen at a glance whether our patient from the previous post had any known liver condition. We would have seen a big fat “liver” or “stasis” or “cancer”, clicked on it, and gotten a list of the record notes in which the term occurs. Even though whoever wrote the note did nothing special to classify it as a term, it would work retroactively on old records without further ado. It would even work for the “external record”, i.e. the text from the previous EHR system, but not for scanned documents in “Kovis”.

It would be useful to have separate tag clouds for the record notes as a whole, the assessments, and the diagnosis codes, but that is easily done. As it is now, at least in Cosmic, you see diagnosis codes repeated dozens or even hundreds of times, which makes the overview of diagnosis codes practically useless.

But, the terminology-catalogue fanatics will say, we’ll see different terms for the same thing, so the weight of a term gets spread over several terms, so give us more money for Snomed CT. The example could be that “liver stasis” will be described as “liver”, “stasis”, “ikterus”, “icterus”, “choledochal stenosis”, etc., etc., giving us many terms with low frequency, which makes searching harder. (I must add that even then it would be enormously better than what we have today, i.e. nothing.) Flickr, Delicious, and others solve this with “merging”. When the user sees several terms that ought to mean the same thing, say “icterus”, “ikterus”, and “jaundice”, he selects all three and clicks “merge”, which makes one of the terms (whichever you like) represent all three and carry the sum of their weights. Dead simple, and it doesn’t require everyone to have used the same term from the start. If a user later discovers that terms were merged incorrectly, he can split them apart again in the same way.
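
A sketch of what the merge operation amounts to, with made-up terms: a user-built synonym map folds variant terms into one chosen representative, which then carries the summed weight.

```python
# Sketch: merged tags accumulate the weight of all their variants.

from collections import Counter

merges = {"icterus": "jaundice", "ikterus": "jaundice"}  # built by users clicking "merge"

def apply_merges(counts: Counter) -> Counter:
    merged = Counter()
    for term, n in counts.items():
        merged[merges.get(term, term)] += n
    return merged

counts = Counter({"icterus": 3, "ikterus": 2, "jaundice": 1, "liver": 5})
print(apply_merges(counts))   # 'jaundice' now carries the combined weight of 6
```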

Since the user is immediately rewarded for structuring the record through “tag merge”, he will actually do it. This is the “what’s in it for me” principle in action.

The attentive reader will of course already have concluded that this system de facto achieves exactly the same effect on the individual record as a shared terminology catalogue could have, but at a fraction of the price. And without having to restructure existing record information.

Tags created this way would then be applied “by default” to records that haven’t been tagged yet, or to the terms in a record whose tags haven’t already been merged or otherwise processed. That way you avoid merges made in one context, in one particular record, silently changing the tagging of all the others; the majority principle would decide instead. The small tagging errors that arise really don’t matter at all. The tag cloud is there to give an overview of the record’s contents, and it will do that regardless. If “pancreas” is mentioned in the record, it will show up in the tag list anyway, unless someone has deliberately sabotaged it.

The attentive reader will of course also realize that if you extract these “tag merges” from all the records, you have a grassroots terminology catalogue that is not only close to free, but probably also considerably more correct and useful than Snomed CT. On top of that, our grassroots catalogue will keep up with the times, as new concepts are introduced in practice, not when they are introduced in some committee.

Also imagine a “current trends” list of tag clouds, and what that could mean for surveillance and epidemiology. What you are all waiting for is a mystery to me, but while you wait, you can do it on Vård IT Forum.

Prompted by a discussion thread in our new Vård IT Forum, I was forced to actually dive into the specifications for NPÖ. I can’t claim I think the system is justified, but now that it exists I want to embrace it, like a good boy should. So the embracing is in full swing, but it has hit a snag, you could say.

On npö.nu you can find this document: NPÖ Gränssnittsspecifikation v1.0 (the interface specification). It was a bit of a disappointment. It describes a communication design that is not of the quality it should be.

Unfortunately, the design is based on server-side tracking of each client’s update level, which is not optimal. It doesn’t scale and is very vulnerable to lost synchronization between the server-side and client-side views of the information. It relies on timestamps as the update-status mechanism, which is very fragile and will cause serious problems down the road. And it uses a polling mechanism that will demand substantial hardware resources if the system ever comes into general use.

I can give a few examples of failure modes that will cause problems:

  • The EHR system delivering data to the NPÖ index goes down and has to be restored from an earlier backup. The information already in NPÖ is no longer consistent with what is in the EHR system. If the information is resent and is identical, how is that handled? If it is resent and is not identical, how is that handled? If the information is definitively lost in the source system, what do you do? Restore in the opposite direction, or discard it in NPÖ as well?
  • If NPÖ updates from an EHR system go through a proxy (intermediary) that accepts the update, which is then lost, how is that handled? Or is the idea that proxies will never be allowed? In that case conversions will never be allowed either, which means the system can only participate in other contexts or systems, such as European initiatives, with great difficulty.
  • How are duplicate messages identified and eliminated? Are duplicate messages arriving via alternative communication paths tolerated?
  • How do you tell the difference between a client system that isn’t working and one that simply has nothing new to report, when you’re not in direct contact with the system? Or a system that works, but has stopped delivering data to a particular client because of a fault?
  • If the status messages going back are lost, how is that handled?
  • If the NPÖ index goes down and loses the last hours or days of information, how do you restore from the sending systems? How do you tell the sending systems which status messages from the NPÖ index should no longer be regarded as existing? How do you specify that the sending systems should roll back actions taken because of those status messages?
  • If another system with the same function as NPÖ is added and also wants to talk to the EHR systems, does the whole thing have to be rebuilt? (As far as I can tell, it looks that way.)
  • If you want redundancy and load balancing between several central NPÖ systems that are not synchronized with each other, how is that handled? As far as I can tell, not at all, which rules out duplication for operational reliability.
  • How are errors in the date and time of record information handled? If a note or a change gets a zero date/time, or a date/time an hour or several days in the past, is it still guaranteed to go out to NPÖ? If it mistakenly lies in the future, will it still go out exactly once, until all interested clients have confirmed that it arrived?
  • If a change to a note carries a time earlier than that of the original note, can the system still present them in the right order? It doesn’t look like it.
  • If a message is resent, does it have to be identical to the first transmission, even if it has been improved in the meantime? How do you verify that if you no longer have the first message? If it doesn’t have to be identical, how do you handle the first version, which now exists only at the sender or the receiver, but not both?
  • How do you produce a complete transaction log of the communication itself, one that can be played back to fully reconstruct the entire interaction a system has had with all its communication partners in a given context?
  • How do you handle information from one EHR system that is transported via NPÖ and incorporated into another system, which then publishes it via NPÖ itself, possibly causing feedback loops, or conflicts between slightly different versions of the same information?
  • How do you handle a client system passing NPÖ information on to another system outside NPÖ, which then has to be updated if the original information turns out to be wrong?
  • There is a “transaction_id” in the system, but it is tied to a particular call, not to a database transformation, which is what it should be tied to. So not even that is usable for robustness.

Roughly the same scenarios can be described between the NPÖ index and its clients. And given a few more hours, I could come up with another ten failure scenarios that this design cannot handle. Or that it can only handle by piling on layer after layer of special-case handlers, which over time will produce so much cruft that the system self-destructs.

I have a strong feeling that the answer to my questions will be that the clocks have to be right and that none of the systems are allowed to go down. And other equally hypothetical and utopian wishful thinking. Or that I won’t get any answer at all, which has been the case so far. That last one is probably the most likely.

The whole design will require far too much tuning, debugging, band-aids, glue, and tape, plus unreasonably reliable hardware and networks, to avoid losing information or causing inconsistencies. Nor is there any room for recovery from problems in communications, databases, and the like. So the county councils had better prepare to buy even more redundant networks and monster servers, for no other reason than a rather suboptimal and completely unjustifiably optimistic design. Nothing is allowed to go wrong here, because then this house of cards collapses.

Most serious of all is probably that no Failure Modes and Effects Analysis (FMEA) has been done, which really ought to be an absolute baseline requirement for something this complex, with direct consequences for patients’ lives. If one has been done, perhaps that document should be shown. On the other hand, they would probably have produced a completely different design if they had seen an FMEA first, so we can safely conclude that this step was skipped. (In my view you can get by without a formal FMEA as long as you have internalized the idea behind it, but since this industry seems to love standards and fat documents, I’ll throw the term in anyway.)

There are many other technical techniques that haven’t been used, which leaves a rather clumsy end result. I don’t find much elegance or robustness here. It’s ugly, fragile, and wrong.

With a little more thought, and above all a little more up-to-date thinking, they would have arrived at a much simpler design with far fewer failure modes, one that would work reliably on much cheaper hardware. The right solution has a different point of control, a different state variable than date and time, and keeps the version state on the client side, for a start.
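
To show the kind of thing I mean, here is a small sketch, entirely my own assumption and not anything in the NPÖ specification: the source system numbers every change with a monotonically increasing version, and each client keeps its own cursor, so recovery is simply re-reading from the last version the client has confirmed. No wall-clock timestamps are involved.

```python
# Sketch: client-side version cursors instead of timestamp-based update status.

from dataclasses import dataclass, field

@dataclass
class SourceSystem:
    next_version: int = 1
    log: list = field(default_factory=list)   # (version, payload) pairs, append-only

    def publish(self, payload: str) -> None:
        self.log.append((self.next_version, payload))
        self.next_version += 1

    def changes_since(self, cursor: int) -> list:
        return [entry for entry in self.log if entry[0] > cursor]

@dataclass
class Client:
    cursor: int = 0   # the version state lives on the client side

    def pull(self, source: SourceSystem) -> list:
        changes = source.changes_since(self.cursor)
        if changes:
            self.cursor = changes[-1][0]   # lost replies or duplicates are harmless
        return [payload for _, payload in changes]

src = SourceSystem()
src.publish("note 1")
src.publish("note 1, corrected")
reader = Client()
print(reader.pull(src))   # both changes, in order, regardless of clock errors
print(reader.pull(src))   # nothing new
```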

Then there’s the fact that I also think the whole information structure is total overkill. Admittedly it is based on a European Standard (note the respectful capital letters), or perhaps precisely because of that, but they should probably have chosen a subset that is directly usable. As it stands, there is far too much room for one party to put information in a field that the other party doesn’t want to read. On top of that, the implementation becomes enormously too complex and time-consuming, i.e. expensive.

It would be nice if someone with knowledge of the design could try to explain and defend it on the forum: Vård IT Forum. In particular, I would very much like to know why none of the present-day techniques I hinted at above were used, and why this rather obsolete path was chosen instead. Were there specific reasons for doing it this way, or did nobody simply think it through? One also wonders why Socialstyrelsen lets this sort of thing happen. They, if anyone, should insist on at least a basic FMEA, but do they?

That was my embrace of “new technology”. Is this what you had in mind?