New process to leave reviews
August 07, 2016
We've revamped the process for people to leave reviews on our site. Users no longer need to create an account before leaving a review. Instead, they can simply fill out a form that includes an e-mail address. The e-mail address is not published; it is used only to confirm that the address is indeed active. Upon receiving the confirmation e-mail, users can publish the review they submitted.
We hope this simplifies the review process, and we also hope that you'll take some time to let others know how you felt about medical providers that you have recently visited for treatment.
July 31, 2016
The Health Care Incentives Improvement Institute (HCI3) released their annual report card on State Price Transparency Laws. While the overwhelming majority of states (43) received the lowest possible grade (F), a couple of states made progress since last year's report card. Only three states -- Colorado, Maine, and New Hampshire -- received the highest grade possible, and two of them had received a lower grade last year. Progress toward health care transparency is slow, but it does seem to be moving forward. It will be interesting to see whether there is an inflection point at which that progress accelerates.
HCI3 grades states on whether they collect and publish payment data for medical services, including whether the state's website is user-friendly. We agree that those attributes are positive, and we hope that states will also consider releasing this data in machine-readable form so that other websites can innovate further.
July 24, 2016
Earlier this year, the Centers for Medicare & Medicaid Services (CMS) encountered some criticism when they prepared to release their overall hospital ratings, with even senators chiming in. CMS has already released ratings information on specific criteria, but those are often too granular for the average consumer. These overall hospital ratings are meant to simplify the consumer experience. After a few months of deliberation, CMS recently announced that they plan on going ahead with their release of overall hospital ratings.
As expected, some from the medical community continue to object. Kaiser Health News reported that an executive at the Association of American Medical Colleges protested that the ratings must be flawed given that many prestigious hospitals have low ratings. Reputations are certainly a reasonable sanity check, but given how subjective they are, it's unclear whether the differences point to a problem with the rating system or whether some reputations have so far been unwarranted. A more constructive approach would be for the medical community to criticize specific metrics or specific weightings -- doing so, however, would almost force them to suggest alternatives, which they seem reluctant to do. To be clear, no one thinks that these ratings are perfect (probably not even people at CMS); what's being debated is whether these ratings are a reasonable next step.
From a consumer perspective, the individual ratings have been criticized before for their lack of variation among hospitals; that is, the vast majority of hospitals received an average rating, rendering the ratings mostly useless. It appears that only about half of hospitals will receive three out of five stars under the new overall system, suggesting that these ratings are likely to be more helpful to consumers.
July 17, 2016
President Obama published a paper in JAMA, touting how the Affordable Care Act has been successful. The paper also recommended increasing competition, whether via a public plan in areas that have only one or two insurers, or by streamlining the process for approving biosimilar drugs. Bloomberg commented on the paper in the context of two proposed major mergers in the health insurance industry.
Competition is certainly important in reducing costs and improving quality across an industry. Ironically, though, a public option might be unfair competition if the government significantly subsidizes it, and might serve to diminish competition if the incumbents believe that they cannot compete. An alternative policy is for the government to offer subsidies (available to either all insurers or all patients) in markets where there are fewer than three insurers.
July 10, 2016
Here's a wrinkle in the idea of disclosing quality metrics: what if one of the metrics is misleading? The rate of hospital readmissions is a commonly accepted quality metric: hospitals that have more risk-adjusted readmissions are typically deemed to provide worse care. Medicare even ties reimbursement to readmissions, penalizing hospitals whose rates are too high. Some doctors from Cleveland Clinic have argued that not all readmissions are bad. Their angle is provocative, showing that hospitals with higher readmission rates for heart failure tend to have lower mortality rates. The authors posit that readmissions are higher at these hospitals because they performed well enough to keep patients alive, even though those complicated cases later warranted a second hospital visit. The argument goes that hospitals that could not prevent death had fewer readmissions as an unintended side effect.
I'm not familiar enough with the issue to understand how plausible this argument is, but I can very well see the possibility that some metrics need refinement. If this is a serious issue, then the caveat should be noted when patients are viewing such data. Nevertheless, I consider the collection and dissemination of this data to be a win for transparency, even if it comes with a counterintuitive disclaimer. Starting with this data and then understanding its limitations moves the conversation further along much more than if the data were never collected to begin with. Transparency gives people visibility; sometimes, though, the image can be hard to interpret.