At DocSpot, our mission is to connect people with the right health care by helping them navigate publicly available information. We believe the first step of that mission is to help connect people with an appropriate medical provider, and we look forward to helping people navigate other aspects of their care as the opportunities arise. We are just at the start of that mission, so we hope you will come back often to see how things are developing.
An underlying philosophy of our work is that right care means different things to different people. We also recognize that doctors are multidimensional people. So, instead of trying to determine which doctors are "better" than others, we offer a variety of filter options that individuals can apply to more quickly discover providers that fit their needs.
There's an interesting blog entry where Dr. Liu comments on Dr. Ofri's lament of being measured solely on clinical outcomes and not on interpersonal skills. Dr. Liu uses his own practice as a positive data point that it is indeed possible to have great clinical outcomes and great patient satisfaction.
What's interesting to me is that health care institutions sit on mounds of performance data (both clinical outcomes and patient satisfaction scores), but essentially none of that information is made available to patients when they select a doctor. You can argue about the meaningfulness of such outcome data, but the fact that health care institutions themselves measure their doctors' performance suggests that, at the very least, the institutions believe there is some merit to the measures. If so, why not release it to patients so they can factor it into their decisions regarding whom to visit for care? I suspect that responses would likely fall into one of the following camps:
1) "Patients won't know how to interpret the data or won't care" -- this might be true, but I think it's a tremendous stretch to say that all patients will be unable to interpret the data or won't care. Why not release the data so that patients who do care can make better decisions?
2) "We monitor our physician performance internally and ensure that all physicians are up to our standards" -- in other words, "don't worry, we've got you covered." Like the first response, this response is awash with paternalism. The institution assumes that its standards are at least as high as the patients' standards. Additionally, since there are multiple dimensions, not releasing the data deprives patients of the opportunity to select a doctor according to his own preferences. For example, one patient might be willing to put up with lower clinical quality indicators in favor of better patient satisfaction scores; or the reverse might be true. Either way, patients don't get the opportunity to choose for themselves.
3) "This is an issue of doctor privacy" -- this is probably the strongest response, and the argument is that doctor privacy trumps a patient's interest in his own health. Maybe. It's a little like a politician saying "there's no need to look at my voting record -- I've got your interests covered." I'd bet that if the economics change such that disclosing performance numbers brings in significantly more revenue, health care institutions will suddenly find a new love for transparency, claiming it as a cherished virtue all along.
Who will be on the patient's side?
As you know, our primary focus at DocSpot has been to connect you with individual health care providers. This week, I had hoped to unveil a new service that would allow you to search for hospitals, but the final touch-ups have taken me longer than I expected. Sometimes the smallest segments of a product can take the longest amount of time. Such is the nature of development.
In this case, I discovered that one of our sources of data was not as tidy as we had thought. Since we deal with publicly available data, we don't expect everything to be nicely sorted and packaged for us. That's what our specialized "robots" are for. However, there are certain times when the data proves to be incorrigible, and we must either reject it as a primary source or dispose of it altogether.
I had relatively high expectations for Medicare's "Providers of Service" list; although it is publicly available, it is not free. And at first glance, it seemed polished and straightforward to integrate. Then, when I ran some diagnostics, I met with the worst nightmare of any engineer tasked with data management: duplicates. Multiple hospitals with the same address and same name -- but different data. I had no idea which profile was correct, and the data's documentation didn't give me any indication of how to resolve the issue, let alone mention possible redundancies.
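The kind of diagnostic described above can be sketched in a few lines: group records by name and address, then flag any group whose rows disagree elsewhere. This is a hypothetical illustration, not DocSpot's actual pipeline, and the sample records (addresses, bed counts) are invented for the example.

```python
from collections import defaultdict

# Invented sample records -- not real Medicare data.
records = [
    {"name": "Broughton Hospital", "address": "123 Example Rd", "beds": 297},
    {"name": "Broughton Hospital", "address": "123 Example Rd", "beds": 310},
    {"name": "Mercy Hospital", "address": "55 Main St", "beds": 120},
]

# Bucket records that share a name and address.
groups = defaultdict(list)
for r in records:
    groups[(r["name"], r["address"])].append(r)

# A group is a problem only if its rows actually disagree on some field.
conflicts = {
    key: rows
    for key, rows in groups.items()
    if len(rows) > 1 and any(row != rows[0] for row in rows[1:])
}

for (name, address), rows in conflicts.items():
    print(f"{name} at {address}: {len(rows)} conflicting profiles")
```

A check like this only surfaces the conflicts; deciding which profile to keep is the genuinely hard part, as the rest of this post describes.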
So, as engineers are wont to do, I started looking for patterns. I found a reference number that might link one duplicate to the next, a date that seemed to indicate when the profile was last updated, a code that suggested a hospital had been shut down, a category that appeared to single out duplicate entries. In the end, the relationships seemed too arbitrary, and I hadn't even rooted out all the redundancies. One pair in particular -- two profiles for Broughton Hospital, in North Carolina -- seemed to mock my efforts: differing by only one or two data points, they matched on every single metric I used to differentiate between duplicate profiles.
After almost giving up on this rich source of data, I finally discovered another Medicare file (on a completely different section of their website) that identifies the unique entries in the problematic source. Problem solved. The question remains -- will there be yet another set of finishing touches? Time will tell -- such is the nature of product development. In the meantime, keep checking our blog for updates, and let us know what you would like to see in our upcoming hospital product.
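The resolution described above amounts to a filter: keep only the rows whose keys appear in the second file's list of unique entries. The sketch below is hypothetical -- the field names and key format are illustrative, not Medicare's actual schema.

```python
# Keys taken from a second "unique entries" file (illustrative values).
unique_keys = {"340007-A", "340012-A"}

# Rows from the problematic source, including one stale duplicate.
pos_records = [
    {"record_key": "340007-A", "name": "Broughton Hospital", "beds": 297},
    {"record_key": "340007-B", "name": "Broughton Hospital", "beds": 310},
    {"record_key": "340012-A", "name": "Another Hospital", "beds": 150},
]

# Keep only the rows the authoritative file vouches for.
deduped = [r for r in pos_records if r["record_key"] in unique_keys]
```

The appeal of this approach is that the messy source never has to be disambiguated by guesswork; the second file acts as the arbiter.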
A Wall Street Journal article reported on the Department of Health and Human Services' requirement that health insurance companies report details of each policy that they underwrite. A key element of this requirement is that insurance companies must report the details via a standardized form, which allows for more meaningful side-by-side comparisons. At last, increased transparency coming soon to the health insurance market (supposedly starting March 2011).
Having bought health insurance on the individual market, I look forward to the increased access to this information. Who doesn't like the idea? Not surprisingly, insurers were said to be "concerned about the potential cost and administrative burden of the new requirement." That has a familiar ring to it -- didn't mobile phone operators say the same thing when regulators required phone number portability?
With increased access to information like this and other tools in adjacent spaces, hopefully patients will be able to make better decisions about their health. That's what we're working towards.
In a prior blog post, I referenced the balance between privacy and transparency in our quest to empower patients to make better decisions. The discussion around patient reviews seemed sufficiently complicated that I decided to address that separately. So, picking back up on the topic, the question is whether we would like to allow providers to hide certain reviews.
For us, there's actually not much of an issue if we knew that all reviews were true (and objective). If that were the case, the answer would essentially be the same as the answer to the question of whether or not to display something like disciplinary actions: yes, we can understand why providers might want that hidden, but no, we wouldn't want to hide it. The question becomes more complicated in light of allegations that people write false information in their reviews (whether that be one provider writing a false review for a competitor, or an unhappy patient making up facts to strengthen his case).
It'd be interesting to read a study on the estimated number of patently false reviews (although I'm sure that such a study would be expensive and difficult to pull together). Overall, patients very much look to see what other patients say -- for a while, that was the most requested feature. The sense that we got in talking with people is that patient reviews need to be taken with a grain of salt, much like reviews for other products and services. While it might be easy to fake one or two reviews, it seems unlikely that someone would go through the trouble of creating ten different reviews for the same provider. Impossible? Nope. Improbable? We think so. And that brings us to our take on whether or not to show patient reviews: a few reviews is not a lot of signal, but the aggregation of many reviews that voice the same feedback over a long period of time is likely to mean something. Interestingly, someone just wrote in with the same opinion.
And for those who are curious, we try to offer something extra when people leave a review on our site. After typing in their feedback, we give users the opportunity to upload some documentation (e.g. an explanation of benefits form or a receipt) and request a verification. If everything checks out, we'll mark that review as a "Documented Encounter" -- it's a little like an online retailer saying that a review is from a "Verified Buyer," since they know the online identity of the individual as well as his purchase history. Have thoughts on this topic? Let us know.