Sleep Medicine Epidemiology
Brigham and Women's Hospital · Harvard Medical School

Enough is enough

When too many variables is a problem


I have parried a number of data requests this week, which has led to more explanations about the nature of our polysomnography (PSG) datasets. Everyone’s eyes widen in disbelief when I mention that the dataset contains “around 1,200 variables” — the exact number is 1,235. For MESA Sleep, our actigraphy dataset comprised more than 1,300 variables. I cringe a little bit each time we share the 2,500+ variable Excel workbook that describes the contents of our MESA datasets, which actually represents 226 printed pages of variable descriptions. Thankfully, we have started with only a subset of those variables for our “official” MESA Data Dictionary.




Extended benefits of CPAP


I had this article pop up on my Google News alerts this week.

Results show that the mean number of nightmares per week fell significantly with CPAP use, and reduced nightmare frequency after starting CPAP was best predicted by CPAP compliance.

“Patients with PTSD get more motivated to use CPAP once they get restful sleep without frequent nightmares, and their compliance improves,” said principal investigator Sadeka Tamanna, MD, MPH, medical director of the Sleep Disorders Laboratory at the G.V. (Sonny) Montgomery VA Medical Center in Jackson, Miss.

We worked with the VA Boston Healthcare System on our HeartBEAT clinical trial, and a quick gander back at the data gives the impression that the veterans in that study generally had higher CPAP compliance than participants from other sites. Nightmares and PTSD were not within the scope of HeartBEAT, but it would have been interesting to explore the factors that influenced CPAP compliance rates.

We have another VA collaboration in the works that is set to kick off this year. Maybe the next discovery lies within!



Results from the CHAT study


Last week the results of the Childhood Adenotonsillectomy Trial (CHAT) were published in the New England Journal of Medicine. Kudos to all of the investigators and research staff across the coordinating centers and seven data collection sites for all their hard work.

In a Reuters write-up on the findings, Dr. Redline commented:

“Improvements in emotional regulation, attention, organizational skills, reduced sleepiness, improved quality of life including socialization and physical and emotional wellbeing were quite large, larger than we anticipated,” coauthor Dr. Susan Redline of Brigham and Women’s Hospital in Boston told Reuters Health.

Yet when the children were formally tested, youngsters in both groups performed equally well, an indication that the sleep disturbance wasn’t causing any measurable cognitive problems.

“Where you objectively measure these cognitive tasks, children can do fairly well in that motivated and structured environment” whether or not they have surgery, she said. “It shows that over a 7-month period of watchful waiting, cognition does not decline.”

I only became involved in the CHAT data management side of things when our Sleep Reading Center moved from Cleveland to Boston, and I know that many more analyses are now in store following the acceptance and publication of the primary results paper. There is much more to come from CHAT!



MESA Sleep coming to a close


A year ago, I made a Project Spotlight post focusing on MESA, the Multi-Ethnic Study of Atherosclerosis. The study’s overall Coordinating Center, out of the University of Washington, has been a delight to work with. We ceased data collection at the end of February, with the final PSG and actigraphy studies arriving from the UCLA site. Going back to last year’s post, I noted that we had scored data on around 1,300 subjects at that time. The final tally of subjects with PSG and/or actigraphy data scored is 2,240. That is quite close to the original target. Nice!

With data collection closed, we turn to the data cleaning phases and look ahead to generating an analytic dataset for widespread use among the MESA investigators and interested collaborators. Luckily for everyone at the Sleep Reading Center and the MESA Coordinating Center, a weekly schedule for dataset transmissions between the two centers was established way back in 2010. We are not fans of disseminating unclean or unkempt data, so we set about developing a set of quality and consistency checks that run on the PSG and actigraphy databases each week, which has spread the data cleaning and correcting burden over time. The system held up over the course of the project, and we anticipate sending off the final MESA Sleep dataset to the Coordinating Center in the coming weeks.
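A weekly check of this sort can be as simple as a set of rules applied to every record in the transmitted dataset. Here is a minimal Python sketch; the variable names, thresholds, and record layout are hypothetical illustrations, not the actual MESA checks:

```python
def check_record(rec):
    """Return a list of (variable, message) problems for one record."""
    problems = []
    # Range check: an AHI should be non-negative and physiologically plausible.
    ahi = rec.get("ahi")
    if ahi is None:
        problems.append(("ahi", "missing value"))
    elif not 0 <= ahi <= 200:
        problems.append(("ahi", "out of range: %s" % ahi))
    # Cross-field consistency: total sleep time cannot exceed time in bed.
    tst = rec.get("total_sleep_min")
    tib = rec.get("time_in_bed_min")
    if tst is not None and tib is not None and tst > tib:
        problems.append(("total_sleep_min", "exceeds time in bed"))
    return problems

def run_checks(records):
    """Run the checks over all records; return {subject_id: problems} for flagged ones."""
    report = {}
    for rec in records:
        probs = check_record(rec)
        if probs:
            report[rec["subject_id"]] = probs
    return report

records = [
    {"subject_id": "M001", "ahi": 12.4, "total_sleep_min": 390, "time_in_bed_min": 430},
    {"subject_id": "M002", "ahi": -1.0, "total_sleep_min": 450, "time_in_bed_min": 420},
]
report = run_checks(records)
```

Running the same rule set every week means each transmission surfaces only the newest problems, rather than a mountain of them at study close-out.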

In the meantime, there are some other short-term (data dictionary) and mid-term goals (future analyses) that have been discussed among the MESA centers. These are where my focus now lies.

Data Dictionary

While we did share a data dictionary early in the study that defined the main dataset (mesasleepdata.sas7bdat) we were transferring each week, we certainly did not fully flesh out all the terms and meanings of the variables contained within. Here is a small portion of that data dictionary:

[Screenshot: excerpt from the MESA Sleep data dictionary]

This “fleshing out” is an iterative process — it requires multiple members of our staff (typically experts on that data domain) to fill in the data dictionary and others to review it for correctness and understandability. The ultimate purpose of the data dictionary is to inform future users of the analytic dataset about each individual variable: metadata such as label, data type, position in the dataset, source, and format definitions for categorical variables.
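As a hypothetical illustration (these are not the actual MESA variables or their metadata), a single dictionary entry might carry fields like the following, along with the kind of completeness check a reviewer would apply:

```python
# Metadata fields every entry must have before the dictionary ships.
REQUIRED = ("name", "label", "type", "position", "source")

def is_complete(entry):
    """True when every required metadata field is filled in."""
    return all(entry.get(field) not in (None, "") for field in REQUIRED)

# A numeric variable: no value labels needed.
entry = {
    "name": "slp_time_min",
    "label": "Total sleep time (minutes)",
    "type": "numeric",
    "position": 14,
    "source": "PSG",
    "format": None,
}

# A categorical variable: "format" maps stored codes to value labels.
quality = {
    "name": "overall_quality",
    "label": "Overall study quality",
    "type": "categorical",
    "position": 9,
    "source": "PSG",
    "format": {1: "Poor", 2: "Fair", 3: "Good", 4: "Excellent"},
}
```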

The sheer size of the MESA Sleep data dictionary is already a bit daunting — 2,500+ variables across the PSG and actigraphy domains! A bit over the top, but that wasn’t my choice. Luckily, the Coordinating Center shared my thinking: a pared down dataset (with perhaps 50-100 of the most commonly used variables) would be more appropriate for the vast majority of investigators who get their hands on these data down the line. Keeping that in mind, we intend to flag these key variables and train the keenest eye on them when producing the latest and greatest data dictionary that will be submitted alongside our final SAS dataset.

Further down the line, we might consider loading these MESA Sleep data into our Sleep Portal, which would allow investigators to explore the data — construct queries, run reports, and download datasets and raw signal files for subsequent use — through their web browser. The Sleep Portal only functions with a meaningful and comprehensive data dictionary, so such is our charge.

Future Analyses — Actigraphy Intervals and PSG Signal Processing

The data we collected in MESA Sleep, overnight PSG and 7-day actigraphy, are “rich” in many ways. That is, while we can generate 2,500+ covariates pretty quickly from the scored and exported data, there will likely be additional value and meaning to extract by examining the data at a finer level. In this case, I have initially been thinking about the usefulness of looking at the interval-by-interval actigraphy data and the signal processing tasks/tools that fall under the purview of other SRC team members (see this relevant blog post about some of our EDF tools).

As the project’s data manager, I have been slogging through the exports that come from the Philips Respironics Actiwatch Spectrum devices we used to collect actigraphy data. My colleagues in Cleveland had previously done a lot of great work processing these exports, which has been of great use now that we have over 2,100 records for MESA subjects. Here is a sample (from a CSV file) of what we get from the Actiware software:

[Screenshot: sample rows from an Actiware CSV export]

The REST, ACTIVE, and SLEEP indicators are the “intervals” that I work with. When scoring an actogram, the scorer places markings that are then exported in this format, and from which we will generate key statistics like average sleep time, average sleep efficiency, and average levels of physical activity. A key next step for me is to link these intervals up with their associated quality/reliability scores, as determined by the actigraphy scorer at the Sleep Reading Center. This task is, let’s say, not as straightforward as I might have wished, due to various quirks in the data, but we will figure it out over time.
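As a rough illustration of the interval-to-statistics step, here is a minimal Python sketch; the column names and values are made up for the example and do not match the actual Actiware export layout:

```python
import csv
import io

# Made-up sample in the spirit of the export: one row per scored interval.
SAMPLE = """interval_type,duration_min,sleep_min
SLEEP,480,432
ACTIVE,960,0
REST,30,0
SLEEP,450,420
"""

def summarize(csv_text):
    """Average sleep time and average sleep efficiency across SLEEP intervals."""
    sleep_rows = [row for row in csv.DictReader(io.StringIO(csv_text))
                  if row["interval_type"] == "SLEEP"]
    minutes = [float(row["sleep_min"]) for row in sleep_rows]
    durations = [float(row["duration_min"]) for row in sleep_rows]
    avg_sleep_min = sum(minutes) / len(minutes)
    # Efficiency per interval = minutes asleep / interval duration, then averaged.
    avg_efficiency = sum(m / d for m, d in zip(minutes, durations)) / len(sleep_rows)
    return avg_sleep_min, avg_efficiency

avg_sleep_min, avg_efficiency = summarize(SAMPLE)
```

The linking of intervals to scorer-assigned quality scores would layer on top of this, filtering out intervals whose reliability falls below whatever threshold the analysis calls for.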

Once that next phase of actigraphy processing is complete, we will ship off supplemental datasets to the Coordinating Center, which will open up additional analytic avenues for MESA investigators. In my post last year I expressed my excitement at the future possibilities of the MESA Sleep data. That future is moving nearer and nearer and my excitement continues to build!



Taking stock in 2013 — Slice and beyond


Here we are, already one month into the new year. How fast time moves!

As Remo has noted over the past month, Task Tracker and Slice have both had new releases that have added a lot of substance to the applications. Slice, in particular, has been keenly in our focus recently because we have vowed to set up and manage new projects requiring data capture and reporting in the application. I decided to set up a small project in Slice to keep tabs on our EDF conversions of the MESA PSG studies, given that data collection is coming to a close and we need to organize our raw data on the over 2,000 subjects. The conversion is ongoing, but it has led to some really cool-looking (well, to me at least) reports. Here is a LaTeX-formatted PDF that I just snagged from Slice:

[Report: LaTeX-formatted PDF of MESA EDF conversion progress, generated by Slice]

Perhaps more exciting than our own, smallish projects are the new things just (finally) getting off the ground, like our Healthy Sleep Healthy Heart sleep clinic based out of Brigham and Women’s Faulkner Hospital and the sleep component to the larger Jackson Heart Study.



Happy holidays!


It has been an eventful 2012 here in the Sleep Medicine Epidemiology group, with lots of fun things awaiting on the horizon in 2013. Task Tracker and Slice are in full swing and I just know that Remo can’t wait to get back to the office next year to keep the improvements going. Not to mention, I think we will finally be rolling out Rely and getting our internal reliability testing in order. Kudos to Piotr on his creation!

Jackson Heart Study and Project VIVA are finally getting going and will surely be in full swing come the new year. With them, and a few other projects here and there, I almost have a full dashboard of Slice projects!

[Screenshot: Slice project dashboard]

We have received myriad contributions from all corners of our group over the past year, and here’s to looking to more innovation, creativity, and cutting edge sleep research in the years to come!

Happy Holidays and Happy New Year!



CONSORT flow diagrams and indeterminacy


For the past two years, I had always thought of the flow diagrams I created to depict recruitment and retention for our clinical trials as “consort” diagrams. I can’t quite recall what I believed “consort” meant in this context (the standard definitions don’t make a whole lot of sense here). With the conclusion of the HeartBEAT trial, however, I was finally clued in to the fact that the proper terminology is “CONSORT flow diagram”. CONSORT, of course, is an acronym for Consolidated Standards of Reporting Trials, which prescribes a number of methods for improving the rigor and quality of reporting from randomized clinical trials. The lead statistician for the HeartBEAT trial insisted we stick to the CONSORT methodology as we conduct the primary analyses and prepare to share the results of the trial with the research community and general public.

The flow diagram we used throughout the study for our monthly Steering Committee and biannual Data and Safety Monitoring Board (DSMB) meetings looks a lot like the final (more detailed) version that will be submitted alongside the primary analyses. Here is an example from mid-study:

[Figure: mid-study HeartBEAT CONSORT flow diagram]
By the end we had screened over 5,700 potential participants across the four HeartBEAT research sites. One of my persistent challenges with HeartBEAT and our other trials is the subjects who end up in the gray “Indeterminate” boxes in the diagram above. I have spent a significant chunk of time explaining the meaning of these boxes to investigators and other collaborators; I suspect this is due to the counterintuitiveness of not being able to classify each and every subject into one of the other boxes in the flow diagram. In essence, this indeterminacy arises when a potential subject appears to exist in an in-between state, e.g. “Eligible” yet neither “Enrolled” nor “Doesn’t agree to participate”. When I talk about appearances, I am speaking from the Data Manager’s perspective: my reporting on the subjects in a study is limited by what is available in our study databases. Oftentimes the “Indeterminate” boxes are filled with subjects with incomplete data entry, typically because the subject exited the study for one reason or another. Clearly, we account for subjects more fully as the process proceeds from screening, to enrollment, and finally to randomization. The volume of screening for this trial leads to lots of unpolished screening data, often because research staff are told to move on to more viable subjects (i.e. the pressures of meeting recruitment goals) instead of ensuring that each and every screened subject reaches a definitive endpoint.
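The classification logic behind those boxes can be sketched in a few lines. This is a hypothetical illustration (the field names and decision rules are made up, not the actual HeartBEAT database schema); the key point is the fall-through to “Indeterminate” when the data cannot place a subject definitively:

```python
def consort_box(subject):
    """Place one screened subject into a CONSORT flow-diagram box."""
    if subject.get("randomized"):
        return "Randomized"
    if subject.get("declined"):
        return "Doesn't agree to participate"
    if subject.get("eligible") is False:
        return "Not eligible"
    if subject.get("eligible") and subject.get("enrolled"):
        return "Enrolled"
    # Eligible but neither enrolled, declined, nor randomized -- or
    # eligibility itself never entered: the database cannot say.
    return "Indeterminate"

subjects = [
    {"eligible": True, "enrolled": True, "randomized": True},
    {"eligible": False},
    {"eligible": True},   # data entry stopped after the eligibility field
    {},                   # screening form never completed
]
boxes = [consort_box(s) for s in subjects]
```

The last two subjects are exactly the incomplete-data-entry cases described above: nothing in the record is wrong, there is simply not enough of it to choose a definitive box.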

For another trial, the number of “Screened” subjects is even higher. Below is a snippet from the higher-level parts of the CONSORT flow diagram, taken from earlier in the year.

[Figure: top portion of the second trial’s CONSORT flow diagram]
Over 15,000 screened subjects! And the numbers are much higher now. The indeterminacy in this trial is likely to be more pronounced in the long run. Given that one of the aims of this trial is to explore recruitment yields from different modes (e.g. face-to-face and mailings) of gathering research subjects, it has been on the minds of the investigators and staff from the get-go to fully capture and explain the flow of these thousands of participants who have passed through the recruitment lens over the past two years. I anticipate the findings and lessons learned from this trial will greatly inform recruitment in our future work.

On a final, somewhat related note, my studies in anthropology have firmly linked the indeterminacy of our research subjects to the concept of liminality in rituals. Clinical trials (recruitment) as ritual (a rite of passage?) — I think I have found my next PubMed flight of fancy!





Project Spotlight: Starr County


For almost two years we have been assessing sleep data collected in Starr County, Texas. At times I have forgotten how long the project has been going on because it certainly has not been in my “spotlight” much, save for a conference call here or there and, more recently, a meeting of investigators that took place in Texas and at which some preliminary data were shared with the group. Overall, our part of the Starr County project has been very smooth sailing and will continue on into 2013.

As for the sleep data being collected, we are using Itamar Medical’s WatchPAT device. This was our first foray into using the device, though we did have some experience with Itamar’s EndoPAT setup from HeartBEAT. The WatchPAT is simpler (for the technician and participant) and less intrusive (the device is worn on the wrist/hand, see photo) than some of the other home sleep monitors we are accustomed to using, though these benefits come with a trade-off down the line: the data are somewhat limited (e.g. fewer signals). Each night’s worth of data is reviewed by a scorer at the Sleep Reading Center for quality and completeness, and the software’s own algorithms are ultimately run on the data to determine our key outcomes, like the Apnea-Hypopnea Index (AHI), Oxygen Desaturation Index (ODI), and oxygen saturation levels (e.g. percent sleep time below 90% saturation).
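One of those saturation outcomes is simple enough to illustrate. Here is a minimal sketch of computing percent sleep time below 90% saturation from per-epoch SpO2 values; the data and the assumption of one SpO2 value per scored sleep epoch are hypothetical, not the WatchPAT software’s actual method:

```python
def pct_time_below(spo2_by_epoch, threshold=90.0):
    """Percent of scored sleep epochs with SpO2 below the threshold."""
    if not spo2_by_epoch:
        return 0.0
    below = sum(1 for s in spo2_by_epoch if s < threshold)
    return 100.0 * below / len(spo2_by_epoch)

# Illustrative per-epoch SpO2 values (%): 3 of the 10 epochs dip below 90%.
spo2 = [96, 95, 94, 89, 88, 92, 95, 87, 91, 93]
pct = pct_time_below(spo2)
```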



Clinical Innovations Grant


Another exciting project is on the way in our little realm of the research universe! The Department of Medicine at Brigham and Women’s Hospital recently announced a Clinical Innovations Grant Award for Dr. Redline. The award is given out once every two years, and Dr. Redline’s proposal was titled: “Healthy Sleep / Healthy Hearts”. Here’s a blurb from the award announcement:

The goal is to implement a comprehensive program to evaluate, diagnose and treat patients who suffer from common yet under-recognized sleep disorders and associated cardiovascular risk factors such as diabetes, obesity, and depression. The program will establish both outpatient and inpatient sleep medicine consultation and clinical services, a group lifestyle education series, and individual counseling and support for all patients enrolled.

This will be something of a first for many of us — branching out into the clinical realm and working with patients in a non-research setting. We have a great group of collaborators in place and are grateful to have the hospital’s backing in this endeavor!