Guidelines Development Workshop
Session 1: Overview of Clinical Practice Guidelines including scope, developing key questions, and inclusion/exclusion criteria for systematic literature searches
Video Transcription
Hello, everyone. This is an evidence workshop for the American Association of Clinical Endocrinology. My name is Jonathan Treadwell. My colleague Stacey Ewell and I are from ECRI, and we are delighted to present to you our thoughts on evidence assessment. First, just a bit about who we are. My PhD is in cognitive science. I am the co-director of the Evidence-Based Practice Center at ECRI. There are around 10 evidence-based practice centers in the U.S. and Canada, and the program has been around since the late 90s. I've been at ECRI about 21 years writing systematic reviews, and my expertise is in statistics and evidence review methods. Stacey, why don't you say a few words about yourself?

Hi, everyone. It's a pleasure to be here. My name is Stacey Ewell, and I have a background in biomedical sciences. I'm currently an associate director at ECRI, and like John, I have several years of experience writing systematic reviews. I also have several years of experience working directly with guideline groups, providing the systematic reviews that support their guideline recommendations. So it's really great to be here and to be talking with you about the process.

Okay, so some of you may be wondering what ECRI is. ECRI is a nonprofit, independent organization with no funding from for-profit companies. The acronym ECRI used to stand for Emergency Care Research Institute, but we've branched out well beyond emergency care into lots of other things, so we've shortened it to just ECRI. You can think of our work in three areas: patient safety, evidence-based medicine, and technology decision support. For more information, you can go to our website at ecri.org. Next I wanted to go over what a clinical practice guideline is, something we are all aware of and follow, but I wanted to get into the meat a bit about the definition. In 2011, the Institute of Medicine, now the National Academy of Medicine, said that clinical practice guidelines are statements that include recommendations, intended to optimize patient care, that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options. I've put some of those words in red, and I wanted to parse out what's meant by them. When we say recommendations, we mean clear statements that certain patients should be managed in certain ways, whether that involves diagnosis or prognosis, treatment decisions, or post-treatment follow-up decisions. Second, the phrase optimize patient care. This is to remind everyone that the underlying goal of these guidelines is to improve patient outcomes, such as survival and quality of life. Systematic review of evidence, which is what we'll be talking about for the next four hours of this workshop, is essentially agreed to be an unbiased summary of the best relevant evidence. And finally, benefits and harms of alternative care options. This is to remind us to consider all reasonable alternatives and to balance their pros and cons. I did want to say a couple of things about what a clinical practice guideline is not. It's not simply a list of indications and contraindications. Those words are more often associated with regulatory statements or insurance coverage policies; that's not what's going on in a guideline. Secondly, money.
Some people feel like a lot of the purpose of a guideline is to save money, and even though that might be one aspect of a guideline, it's certainly not the only one. A guideline is also not going to be based on a narrative review of evidence. The problem with a narrative review is the risk of cherry-picking evidence in order to support a view that you already have. So it's not letting the evidence lead the way; it's letting your prior opinion lead the way, which is often a source of bias. And finally, we're not just looking at the benefits of a given management strategy; looking at only one option can also result in some level of bias. I did want to share this quotation from 1996. It kind of sums up the general philosophy of a lot of people in evidence-based medicine: the practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic reviews. So it's really a joining of opinion and evidence. And this quote actually leads to two of the themes that we'll be seeing throughout this workshop. First, that no evidence doesn't always mean that you'll have no recommendation. And secondly, even if you have lots of evidence, that doesn't always mean there will be a recommendation. So what do I mean by that? Well, let's consider the extreme example of parachutes. If you're about to jump out of an airplane, do you really need a randomized trial to convince you that you would be better off with a parachute? No, you don't. So sometimes no evidence is necessary to support a recommendation. On the other side of things, suppose we had a really well-done randomized trial comparing some drug to placebo, and this trial reports mild headache rates of 20% and 25%. Are you convinced now that the drug itself reduces headaches? Well, probably not. No matter how well that trial was done, there are likely many other things going on that might make you not want to recommend that treatment. There might be adverse effects, and it's a fairly small difference in the rates. So here's a theoretical example where even when we have good evidence, that might not be enough to support a clinical practice recommendation. I also wanted to share, just as an overview, this continuum of evidence quality. Think of all the studies out there; they're going to lie somewhere on this continuum. On the far left, in the green, we have a perfect study, maybe a triple-blind RCT with 0% attrition, a very well-done study on the extreme end of high quality. Contrast that with horrible studies, such as a case report, on the low end of quality. We as evidence reviewers are often faced with the question: where do we set the bar for inclusion? Now, the reality is, for a clinical practice guideline, decisions have to be made. Clinicians are going to have to choose something when it comes to managing patients. If we as evidence reviewers end up setting the bar too high, such as requiring double-blind RCTs, we're going to end up having very little evidence to summarize, and the decision at the bedside at the end of the day is likely going to be based more on opinion, because there's very little evidence to be summarized. On the flip side of the coin, what if we were to set the bar low and take lots of studies, even single-arm studies? We're going to have a lot more studies to summarize, but at the end of the day, those poorer study designs might be misleading and result in poor decisions. So you're really looking for that sweet spot: good studies but not bad studies, and enough evidence but not too much.
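To put rough numbers on the headache example from a moment ago, here is a minimal sketch of the arithmetic. The trial size is not given in the talk, so 200 patients per arm is assumed purely for illustration; the point is only that a 20% versus 25% difference can leave a wide confidence interval around a small absolute effect.

```python
import math

# Illustrative only: the talk gives headache rates of 20% vs. 25% but no sample
# size, so 200 patients per arm is assumed here purely for the arithmetic.
n_drug, n_placebo = 200, 200
p_drug, p_placebo = 0.20, 0.25

# Absolute risk difference (drug minus placebo)
rd = p_drug - p_placebo  # -0.05, i.e. 5 fewer headaches per 100 patients

# Standard error of the risk difference and a 95% confidence interval
se = math.sqrt(p_drug * (1 - p_drug) / n_drug +
               p_placebo * (1 - p_placebo) / n_placebo)
ci_low, ci_high = rd - 1.96 * se, rd + 1.96 * se

print(f"Risk difference: {rd:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
# Under these assumed numbers the interval spans zero, so even a well-run trial
# of this size leaves real uncertainty about whether the drug reduces headaches.
```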
Another thing I did want to mention is subjectivity. Many decisions made by evidence reviewers and guideline panels are subjective. The term evidence-based is not intended to be equated with the word objective, and we'll be going over lots of these types of judgments in the next few hours. But what can we do, given all this subjectivity, where different people can have different opinions? Well, it's pretty clear to us that the two major conceptual approaches are, first, to be structured and use a structured process that follows certain systematic steps, and second, to be transparent and document why you decided various things. So underlying that subjectivity should be some level of clarity as to how you came to those decisions. For the next few hours, this is an overview of where we'll be going. We're going to start with Stacey, who will go over the scoping of the systematic review, including the PICOs and the key questions to be formulated, approaching it from that general stance. Then, in the second hour today, I'll go over the GRADE process and its various aspects, including downgrades, upgrades, and the overall certainty of evidence. Tomorrow, in the first hour, I'll go over a complete walkthrough example in the world of obesity, and then Stacey will talk about crafting recommendations for a clinical practice guideline. So with that broad overview, I will turn it over to Stacey.

So as John said, I am going to talk about the initial stages of that process. I am going to talk about the scope of the guideline, which is the clinical areas that the guideline is going to be concerned with. I'm going to talk about developing the key questions, or review questions, sometimes also referred to as PICO questions; this is where we turn those clinical areas of interest into a series of key questions that will be addressed by the systematic review. Then we're going to talk about planning and conducting the systematic review. And then, as John said, he will later talk about rating the quality of the evidence, and I will talk about developing those recommendations. Here is another figure that outlines the stages of developing a clinical practice guideline, and we are up here where we're setting the priorities of the guideline. That involves developing the scope and generating the key questions. When we're thinking about the scope of the guideline, we're thinking about the clinical areas that the guideline will cover, and those clinical areas are the areas in which the guideline panel wants to be able to make recommendations. The target audience is both those to whom the guideline is intended to make recommendations and those about whom the recommendations are made. So we're looking at the clinical areas that the guideline will cover and the end users of the guideline, but also the patient population that will be covered by the guideline. And when we're thinking about scope, what does it look like? The scope is going to cover, as I said, what the clinical areas of interest are going to be and the health issues that are going to be covered. So it's also what is in the guideline and what is not in the guideline.
And that's the decision part of developing the scope of the guideline: making decisions about what will be covered and what will not be covered, so prioritizing those clinical areas. And as I said, it's also about deciding who this guideline is intended for and who it is about. So the patient population that will be covered, and the end users: is that going to be primary care doctors? Is that going to be primary care doctors and specialists? And then it also needs to consider the "what" of the guideline. What will be covered in terms of the actions for the patient population? This would be the interventions, and those interventions can be broad, as John mentioned. They can be treatment interventions, they can be diagnostics, they can be prognostic interventions. We'll talk a little bit about that in a moment. And when you're thinking about developing the scope of the guideline, there are some ways that you can anchor the thinking. As I said, the whole goal of the guideline is to provide guidance where guidance is needed, and to make recommendations where the guideline panel feels those recommendations are needed. So in anchoring the thinking, you're asking: are there areas of uncertainty in clinical practice? Are there areas of disagreement about best practices? Is there a debate in the literature? These are all areas where there may be a need for guidance. And then, thinking about what John said about what a recommendation is, the goal is to optimize patient outcomes. So think about clinical areas where those outcomes need to be optimized: is there potential for improved outcomes in patient care? Is there potential for improving how patient care is delivered? If we think about the COVID situation, which changed, at least temporarily and maybe permanently, how some care is delivered, we can think about what the safest way is to deliver care in a situation where not everyone can go to their primary care doctor's office and maybe they're relying on telehealth. That's an area where it is becoming increasingly important to provide some guidance, so, optimizing that patient care. And then there are other areas to think about: a guideline is an opportunity to advance equity. So think about areas where there are potential inequities, where there are health disparities; a guideline can provide guidance in those areas. If we're thinking about different populations that may be excluded because of the geographical area in which they live, for example patients living in a rural area where it may be difficult to access certain types of care, maybe guidance can be provided for that group of patients. So looking at subgroups of patients that can be addressed within a guideline. These are all areas to think about when trying to prioritize the scope of the guideline and to make it relevant to the end users, because the goal is to provide guidance where guidance is needed. But keep all of that in mind alongside the time and the resources, because guidelines typically cover several different areas, and we don't have an infinite amount of time to develop a guideline, and we don't have infinite resources to do it.
Also think about the expertise that's needed in order to cover the clinical areas that you feel are important to provide guidance on. And then, thinking about the recent COVID situation, timeliness has been very important with these guidelines; time is of the essence, because people need guidance now. So that's something to think about when you're developing the scope. Developing a guideline is not an individual effort, and it's not the effort of a couple of people. When trying to make decisions about the scope and about the key questions that will be covered, it's important to have multiple perspectives. So having a guideline panel that includes multiple stakeholders is really important. Those stakeholders should probably include the end users of the guideline. They should include clinicians or other service providers that have expertise in the clinical area and in that patient population. And extremely important is having a patient perspective within the guideline panel. This can happen in different ways: you can have patients actually participating as part of the guideline panel, or you can have patient representatives. I've worked with guideline groups that have held patient focus groups at the initial stages to help inform them about the scope of the guideline, the key questions, or the outcomes to include. It's also important to have methodologists, at least as consultants, to help structure the systematic review that's going to provide the evidence supporting the recommendations. The methodologist is also important in helping to formulate questions that will be answerable within the evidence space. And having an information specialist or a medical librarian involved in the guideline development is also important, because they are going to be able to locate the evidence that will supply the systematic review. And then, depending on who the audience is for the guideline (maybe this is a local guideline intended for a hospital), you might want representatives from that organization. So why is it important to have all these multiple stakeholders? Because they're providing different perspectives. As John said, a trustworthy guideline is the joining of opinion with evidence. So you want the opinion of the multiple individuals that are involved with the guideline, those using the guideline, and those who the guideline is about. The opinion of these individuals will come into play in multiple areas of guideline development, as you'll see as we talk through these stages. The scoping of the guideline is really reliant on expert opinion, because that's where we're understanding where the need for guidance is and which clinical areas need recommendations. It's important to have that expertise when we're developing the key questions, and you'll see later today when we're prioritizing the outcomes, and also when we're developing the language and rating the strength of the recommendations. So it's a team effort, and that team effort will help in developing a more relevant guideline. I want to turn now from having identified our clinical areas to turning those clinical areas into a series of specific research questions that we will cover in the guideline. And these research questions really help in a number of ways.
These questions are going to help develop the search strategy that will identify the relevant evidence for the systematic review. The key questions also help us determine the type of evidence that is needed. For example, let's say that our guideline is going to focus on management of overweight and obesity, and we are interested in recommending medications for weight loss, but we want to know which medication is most effective. Because we want to be able to recommend the medications that are most effective, we build that thinking into the way we develop our key question, and the evidence we need will be comparative trials, where we're comparing one type of drug to another or a group of drugs to another group of drugs. That's different from wanting to know what the long-term harms of those drugs are, where a controlled trial might not be necessary and we're looking more at observational studies. So these are the areas that our key questions are going to incorporate. They'll also help us structure our analysis and our synthesis of the evidence. And importantly, they will help us structure our guideline recommendations, because what's included in the questions is essentially the language that will then be included in our recommendation statements. And I can't say this enough: these initial stages of guideline development are crucial, because they set everything in motion. But they are also stages that can take a lot of time. It's easy to get derailed during these stages of the guideline development process, because there is so much to consider, so many areas where maybe there is a need for guidance and a need for question development. And I want to introduce you to something that can really derail the initial process of guideline development. And this is, oh no, oh no, it is the scope creep. John and I are very familiar with the scope creep. I've had more run-ins with the scope creep in my role working with guideline panels than I would like to talk about. We like to picture it as a beast, because in the scope and key question development phase we're trying to narrow the clinical areas so we can cover them in a timely way and within our budget, and the scope creep likes to push the limits. He has these fingers that want to push the limits: we've got to include everything, we need to include this. The scope creep also likes to take a walk in the past and say, well, we covered this in the previous guideline, so we should cover it now, continually bringing up areas that we may not need to cover because we know the evidence there and we're pretty comfortable with that standard of care. And he's got a big angry mouth and he pushes his opinion on everybody. He can be a really disruptive creature. So I just want to warn everyone about the possibility that scope creep will come into play. And there are fairly common reasons why we see scope creep, or at least these are common reasons that I have seen it come into play. One situation that happens not often, but does happen, is that you have finalized your key questions, you are finished identifying the literature, you're writing up the evidence reports, you're about to start the necessary work of grading the evidence, and all of a sudden you hear about a new study that just came out.
And it's an important study, but it's not in a clinical area that you had originally considered within the guideline. So what do you do? Do you add a new question? Do you tweak the wording of your existing questions to accommodate this new study? Do you push back the publication dates so that that study will be included in your guideline? These are all things to consider, but each of them comes with consequences or ramifications that you also need to consider, namely the time and resources involved with adding a new question, changing the wording of an existing question, or expanding that search date; when you do that, you add time and you add resources. So one of the things to think about when a situation like this happens, and it has happened (I've been on a guideline where a study came out and we had to think about how to accommodate it), is the importance of that study. Is it going to change the nature of the guideline that you intend to produce right now? If it's not, then maybe it's not something that needs to be included in the current guideline; it's something that you can include in the update of the guideline, and maybe at that point there will be additional studies. Because if you are going to accommodate that study, and let's say you do push back the publication dates, you can't push them back only for that one study; you will then have to redo the searches for all of your other key questions. That can add considerable time to the guideline development process. Then there's another situation that's very frustrating and happens fairly commonly, and this is when you cannot find, you do not identify, evidence for one or more of the key questions. Maybe it's because of the way you worded the question, and we're going to talk about wording of the key questions in just a moment, but let's say your question is very specific. It's looking at pain management for back pain among older adults who are 80 years or older, who have never had back surgery, and who are currently in physical therapy, and you want to know if adding opioid treatment is going to improve their ability to function in their daily activities. It's a very specific question, and you're not finding evidence. So one of the things that people think about is, well, let's broaden that question. Broadening the question can be a possibility, and so maybe you think, well, maybe we should broaden it to: do opioids reduce pain? I will talk momentarily about the downside of broadening a question. But if you are going to change the scope to accommodate newer studies, you have to think about all the other key questions that you have. And again, you have to think about the importance, and you have to think about the time and the resources involved in changing the scope. I will say one thing about the scope creep: he really likes broad questions, because they give him room to grow and grow and grow. So let's talk a little bit about wording. Actually, before I do that, I need to talk about one more reason for scope creep, and this is a very common situation that comes up. Group dynamics are what make a guideline an amazing document, because group dynamics can make the guideline very relevant. They provide us with different perspectives, which make the key questions and the whole scope of the guideline much more relevant to clinical practice.
But group dynamics can also be difficult to manage when there's a disagreement. Oftentimes we want to resolve that disagreement in a way that makes everybody happy, and so if one group wants to include this clinical area and another group doesn't, but they want to include some other area, maybe to resolve that we think, oh, let's include everything, because we want everyone's voice to be heard in this guideline. Sometimes the judgments about what's important and what's not are subjective, as John said, there's a lot of subjectivity, and we change our minds, but the guideline cannot accommodate everything and everyone. People's perspectives will come into play in different areas of the guideline. So maybe a perspective cannot be accommodated within the scope and key question development, because it would make the scope too broad, but it can be incorporated in other areas of the guideline. So let's talk about key question development, because one of the best sources of protection against scope creep is a well-developed review question or key question, or what you'll learn can also be called a PICO question, because a clear, well-developed key question makes it very obvious what is and is not in scope for the guideline. And we have tools that we can use to help us focus our key question; in just a moment I'll talk about those tools. We want our question to be clear and focused. We don't want it to be too vague, too specific, or too broad. Let's take a step back and talk about some of the issues if we have questions that are too vague. I said previously that we might want to expand our question in order to identify more literature, so maybe we're expanding our question to: do opioids reduce pain? What's wrong with that? Well, people are going to have to make assumptions if you have a question that's that vague. They're going to have to make assumptions about, well, for whom? Is there a specific population you're talking about in this particular question? What kind of pain? Is it back pain? Is it migraine pain? Is it surgical pain? And compared to what? Because we're making assumptions, the assumptions can be incorrect, and if we make the wrong assumptions, we're going to wonder afterwards, when we're trying to put all of this evidence together, why some studies are included and why some are excluded. We're going to have difficulty, with such a broad question, trying to synthesize that evidence. The evidence is going to be very heterogeneous; we might have studies on this kind of pain and that kind of pain, so how do we make sense of all that? And that's going to have ramifications for rating the quality of the evidence and framing the recommendations. Who will we frame the recommendations for, about what, compared to what? And ultimately we'll likely have disagreements in how we rate the strength of that recommendation. On the flip side, and we talked about this previously, a question that is too specific can lead to other types of problems. If we have something as specific as: for male patients age 80 plus with chronic low back pain who have never had back surgery, what is the effect of adding opioids to physical therapy and exercise on active living? It's a very specific question, so it is unlikely that we would find evidence directly addressing that key question.
So one of the problems with a too-specific question is that it's unlikely that we'll find evidence. And if we do happen to find a study that addresses that very specific question, it is not going to be very generalizable to the broader population that we're probably covering within our guideline. Maybe we are looking at chronic pain, but this question is so specific to a particular kind of chronic pain. So, as John said about finding the sweet spot for the bar where we want our evidence to be, from higher quality to lower quality, we also want the sweet spot in terms of the kind of key question that we're asking. And in order to get to that sweet spot, we do have tools. Some of those tools we've already talked about: having expertise within our guideline panel about the clinical areas of interest, to help us set those priorities for what the questions need to be about, and engaging perspectives from our different stakeholders to help narrow those questions so that they are very specific to what we want to recommend and provide guidance on. And then there are frameworks that can help us with the actual wording of the question, so that the question includes what we need it to include in order to make our recommendation. One thing I want to add when we're thinking about key questions, because this is very common: people tend to generate questions that they think they're going to be able to find evidence for. I've been on guideline panels where it's said, well, we want to ask this question because we know this has been a very active area of research recently, and we don't want that other question because we don't think there's going to be any evidence on it, so it's not going to help us. But that's not what you want to be thinking about when you're developing the questions, because it's not the evidence that's going to lead the recommendation; it's the clinical areas that you have chosen, those areas where you feel you need to provide guidance. It's the issues that will drive the key question development, because lack of evidence can be just as important as having evidence. Lack of evidence can be evidence of a gap within a clinical area that you feel is important to provide recommendations and guidance on, and that can be something that future research will be able to address. So when we're thinking of key questions, we're thinking about the importance of those questions to the patients and the clinicians: what are the issues that are important? Turning back to our frameworks for developing these questions so that they are clear, there's a very common framework used in systematic reviews, called the PICOTS framework. The PICOTS framework is very helpful because it breaks down the components of a question. It breaks it down in terms of the population of interest, which is the who of the recommendations and the key questions; the interventions, the what; the comparison, compared to what; and then the outcomes that are important, and the time points and the settings. So let's talk a little bit about each of those components of the key questions.
So the P, which is for the patient population or the problem: when we're thinking about the patient population this guideline is going to be about, we need to think about whether there are certain characteristics of that population that are important, and what the disease or condition of interest is. When we're thinking about patient characteristics, if we're looking at management of overweight and obesity, are we talking about the management of adult patients? Are we talking about the management of adolescents or children? We need to be specific about the population that we intend this guideline to be about. Then the interventions: they can range from treatments to diagnostics, exposures, and prognostic tools, so what is the action? What is the what of the key question and then of the recommendation? Then we need to think about the comparator: compared to what? When we're thinking about the example where we want to be able to recommend the most effective medication, do we need a comparator, and is it standard care, another medication, or whatever physicians might otherwise have the choice to use as an intervention? And then the outcomes are very important to think about, and John will talk about how to prioritize these outcomes, but we need to select outcomes that matter to patients, outcomes that are good determinants of the success of an intervention, and outcomes that will help end users make decisions about different interventions or how to address the clinical problems or issues. Then there are components of the key questions that are sometimes optional, because they're not always relevant, but they're something to think about as you're developing the key questions. Timing can be important. When you're thinking about timing, you're thinking about when we would expect the intervention to change patient outcomes. Would we expect to see changes in the short term? Would we expect to see changes over a longer term? Depending on when we expect those changes to occur, we might want to include timing within our key questions. So let's say we're looking at different diets for weight management. Maybe we don't expect those diets to have an immediate impact; maybe we would be looking at the impact of the diets at six months or longer. If timing is important, then we might want to include it in our key question statements, but at least think about it, because we will need to document these decisions later, and I'm going to talk about where that documentation takes place. Setting may also be important: is the intervention something that is provided in a particular setting? Is it in a hospital setting, where that intervention is going to be delivered, so that guidance needs to focus on that particular setting? Is it in an outpatient setting? These are aspects of the question that you need to think about, but they might not always be relevant.
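As a rough illustration of how the PICOTS elements force each part of a question to be made explicit, here is a minimal sketch in Python. The field values are invented, loosely based on the weight-management example above, and this is not part of any guideline tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyQuestion:
    """A review (key) question expressed in PICOTS terms."""
    population: str                 # P: who the recommendation is about
    intervention: str               # I: the action being considered
    comparator: str                 # C: compared to what
    outcomes: list                  # O: outcomes that matter to patients
    timing: Optional[str] = None    # T: when effects are expected (optional)
    setting: Optional[str] = None   # S: where care is delivered (optional)

# Invented, illustrative values only
kq = KeyQuestion(
    population="adults with overweight or obesity",
    intervention="lifestyle/behavioral therapy",
    comparator="usual care",
    outcomes=["weight loss", "quality of life"],
    timing="6 months or longer",
    setting="outpatient",
)
print(kq)
```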
Usually during the key question development phase, guideline panels will go through a brainstorming phase, where we're thinking, well, here are all the questions that we want to address in our guideline, and the questions may not yet be refined into the PICOTS elements that I just talked about. So there is a process of refinement, where we take questions that might start out as vague. If we're thinking about our example of overweight and obesity, maybe we start with a vague question of what interventions are effective, and we probably already know why this question is considered vague, because when we say something like "what interventions", well, which interventions? There are a lot of different interventions, so that in and of itself is a vague statement. So we start to think about how we can make this vague question better. Maybe we're really interested in lifestyle changes, or lifestyle and behavioral therapy, and so we might change the question to: is lifestyle/behavioral therapy effective to treat overweight and obesity? But we're also interested in different components of this lifestyle therapy. So this question is getting better; at least we're starting to drill down to what the interventions are, at least in this area that we're interested in. But maybe we can come up with an even better question. Let's take a moment to break down this improving question and see which elements we have that map onto our PICOTS framework. We have the population, we have the condition: we're looking at overweight and obesity. We do have an intervention listed; we're looking at lifestyle/behavioral interventions, and we're also interested in the components of those interventions. It's a little unclear, if we state a question with lifestyle slash behavioral therapy, whether we're interested in just lifestyle, or just behavioral therapy, or whether this is a group of interventions that are typically offered together. Components can be tricky to address within this broad a question. And we're missing the comparator, we're missing the outcomes, we're missing the timing, if that's important, and we're missing the setting, if that's important. Now, you don't necessarily have to have every single element listed in the key question, but if the key question is more specific to some outcomes than others, then you might want to list those outcomes in the question. But let's just say that's the question we're working with. As we're discussing with our guideline panel, maybe we are interested in a more specific population; maybe we're interested in obese patients with metabolic syndrome. So we're drilling down to the subpopulation of patients that we really want to know about within this maybe broader set of questions. And maybe we're more interested in the nutrient composition of different diets, so we might be interested in the carbohydrate level of the diets and the fat level of the diets. And in terms of the comparator, maybe we want to know if there's one diet that we can recommend over another diet.
So we're thinking, maybe it's a diet where we're not changing the nutrient composition, or it's another diet where we change it in a different way. And in addition to weight loss, which is a very important outcome to patients, we're also interested in cardiovascular disease and how changing the nutrient composition of the diets will have an impact on preventing cardiovascular disease. We know that these diets are probably not going to have an impact on weight loss, at least not noticeable weight loss, until maybe six months out, and especially when looking at cardiovascular disease outcomes, we need a longer timeframe. So we have our timeframe, and then the setting: this is something that's going to take place in an outpatient setting. So now we have a clearer idea of what we are asking when we're thinking about interventions for overweight and obese patients, and we can narrow that down into our PICOTS elements and come up with an even better question. Let's use the slide to help us. We went from our vague question of what interventions are effective to treat overweight and obesity, to our better question, which looks at lifestyle and behavioral therapy and its effect in treating overweight and obesity, and also looks at the components. But we realized that what we're really trying to answer, and what we want to make a recommendation about, is how different diets have an impact on weight loss and cardiovascular disease. And so we changed our question to be more specific, but not so specific that we feel we're not going to find any evidence: for obese patients with metabolic syndrome, what is the impact of dietary nutrient composition on weight loss and cardiovascular disease? So now that we have our questions developed, we need to think about how we identify the evidence that will address those questions. Part of that is developing inclusion and exclusion criteria, and this is important: it is further documentation of your decision-making process. As John said earlier in the introduction, what we do have are ways of making this process structured and ways of making it transparent, and documenting why we're including some studies and not others is an important part of that decision-making process and a way of making the process transparent. In addition, these criteria will be used to help us identify and screen the literature that will address our key questions, and they will ultimately help us synthesize the evidence that we find in our searches. So when we're conducting the evidence review, we need to take a step back and remember that the scope is guiding the review questions, the review questions guide the inclusion and exclusion criteria, the inclusion and exclusion criteria guide the searches, the screening, and the data that we abstract from the studies that meet our inclusion criteria, and the abstraction ultimately guides our evidence synthesis.
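To show how a refined question like the diet example might be written down as a priori eligibility criteria before any searching begins, here is a minimal sketch. The field names, values, and the tiny screening check are invented for illustration and are not from any particular guideline protocol.

```python
# Illustrative a priori inclusion/exclusion criteria for the refined diet question.
# These entries are examples only; a real protocol would be agreed by the panel
# and documented before the literature search is run.
eligibility = {
    "population":   "adults with obesity and metabolic syndrome",
    "intervention": "diets that alter nutrient composition (e.g., carbohydrate or fat level)",
    "comparator":   "a diet with a different nutrient composition",
    "outcomes":     ["weight loss", "cardiovascular disease events"],
    "timing":       "follow-up of 6 months or longer",
    "setting":      "outpatient care",
    "designs_included": ["randomized controlled trial", "comparative observational study"],
    "designs_excluded": ["case report", "single-arm study"],
}

def passes_design_screen(study: dict) -> bool:
    """Toy screening step: keep a study only if its design is on the included list."""
    return study.get("design") in eligibility["designs_included"]

# Example: a hypothetical retrieved record
print(passes_design_screen({"design": "case report"}))  # False
```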
So we need to think about all of these processes and how they feed into each other, ultimately getting us to the goal of a recommendation that optimizes patient outcomes. When we're thinking about our inclusion and exclusion criteria, these are the criteria that specify the characteristics of the studies we need to address the review questions. Those characteristics are the ones we talked about in our PICOTS framework, so the population and the interventions that we're interested in, but they will also include characteristics of the study methodology, for example whether we're interested in comparative trials or observational studies. And the purpose of these criteria is not only to identify the evidence for the key questions, but also to minimize reviewer bias and subjectivity in identifying it. We don't have complete control over all of the bias and the impact it can have, but we have ways of minimizing that bias and subjectivity. These inclusion and exclusion criteria should be developed a priori. Before we start searching for the evidence, we need to develop the criteria so that we make sure the evidence we retrieve comes from a systematic process, and not because I know that there's going to be this study and I want that study to be in this review. We don't want to structure our inclusion and exclusion criteria just so that they capture the particular studies we think are important or that we know are out there. So we want to minimize this potential for bias by setting the inclusion and exclusion criteria before starting our systematic review search. It also limits the potential for bias from flawed study designs and conduct: we can think about the kinds of studies that best address our key questions, and those are the studies that we're going to include in our evidence base. Our inclusion and exclusion criteria ultimately help us focus the review on the review questions; we keep those questions in mind when we're developing the criteria. And again, this is another tool that helps us defend against the scope creep. As I said, it promotes transparency, and something else that's really important is that it promotes reproducibility of our guidelines should we later update them, because it is a history of why we made decisions at this point and whether we need to change those decisions later. So, briefly, I want to go over some of the key areas of conducting the systematic review. When we're conducting the systematic review, we're going to be doing a literature search to identify studies to address our questions, and we want that literature search to be unbiased, systematic, transparent, and reproducible. We want to include a number of different databases, so we will have multiple databases, such as PubMed and Embase, and a good literature search strategy will include controlled vocabularies and categorized concepts. These can all be difficult, challenging things, particularly when we're talking about controlled vocabularies.
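As a toy illustration of what a controlled-vocabulary search might start to look like for the diet question, the snippet below assembles a PubMed-style query string. The MeSH terms and free-text keywords are examples only and would need to be expanded, tested, and translated for each database searched.

```python
# Toy PubMed-style query for the diet / metabolic syndrome question.
# MeSH headings and [tiab] (title/abstract) keywords are illustrative; a medical
# librarian would refine and adapt the strategy for each database.
population   = '("Obesity"[MeSH] OR "Metabolic Syndrome"[MeSH] OR obese[tiab])'
intervention = '("Diet"[MeSH] OR low-carbohydrate[tiab] OR low-fat[tiab])'
outcomes     = '("Weight Loss"[MeSH] OR "Cardiovascular Diseases"[MeSH])'

query = " AND ".join([population, intervention, outcomes])
print(query)
```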
Developing and refining searches like these is why consultation with a librarian is important: to help us build the searches so that we optimize them to retrieve the evidence that will address the questions we are asking. Evidence synthesis is also part of the systematic review process: taking the evidence that we identify and synthesizing it so that we can begin to look at the evidence and decide on its strength and quality. Whether we synthesize that evidence quantitatively, through a pooled analysis like a meta-analysis, or narratively depends on the type of evidence that we get, the expertise that we have, the quantity of the evidence, and the completeness of the data. There are a number of software packages that we can use to help us synthesize the data, and consultation with the methodologist will help us make decisions about software packages and about when it is or is not appropriate to do a pooled analysis. One of the software packages that's available is Review Manager, referred to as RevMan, and Review Manager is adaptable to a number of the GRADE tools; John will talk about some of those GRADE tools that are used when you're assessing the quality of the evidence. So I just wanted to mention that. In summary, we talked about a lot, and I just want to remind everyone that the scope and review question stages of the clinical guideline process are crucial. We have to be aware of the scope creep, and we have tools that can help defend us against it. We don't want our questions to be too vague, too specific, or too broad; we want our questions to be clear and focused so that we can protect ourselves against the scope creep. There are frameworks available, like the PICOTS framework, that we can use to help guide key question development, and the inclusion and exclusion criteria that we develop to help us identify the evidence are a way of minimizing bias in study selection and a way of being transparent. They will guide our literature searches, our screening, our data abstraction, and then the synthesis of the evidence, which can be done in multiple ways, quantitatively or narratively; that depends on the data, and a lot of it depends on the time and the resources. So with that, I want to turn it over to my colleague, John.
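To make the idea of a pooled (meta-analytic) estimate mentioned above concrete, here is a minimal inverse-variance, fixed-effect sketch. The study counts are invented for illustration, and real syntheses would be done in dedicated tools such as RevMan rather than ad hoc code.

```python
import math

# Invented example data: (events, total) in treatment and control arms of three
# hypothetical trials. Demonstrates inverse-variance fixed-effect pooling of
# log risk ratios; numbers are illustrative only.
studies = [
    ((15, 100), (25, 100)),
    ((30, 250), (45, 250)),
    ((8,  60),  (12, 60)),
]

weight_sum = 0.0
weighted_log_rr_sum = 0.0
for (e_t, n_t), (e_c, n_c) in studies:
    log_rr = math.log((e_t / n_t) / (e_c / n_c))   # log risk ratio for one study
    var = 1/e_t - 1/n_t + 1/e_c - 1/n_c            # approximate variance of log RR
    w = 1 / var                                     # inverse-variance weight
    weight_sum += w
    weighted_log_rr_sum += w * log_rr

pooled_log_rr = weighted_log_rr_sum / weight_sum
se_pooled = math.sqrt(1 / weight_sum)
rr = math.exp(pooled_log_rr)
ci_low = math.exp(pooled_log_rr - 1.96 * se_pooled)
ci_high = math.exp(pooled_log_rr + 1.96 * se_pooled)
print(f"Pooled risk ratio: {rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```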
Video Summary
In this video, Jonathan Treadwell and Stacey Ewell from ECRI present an evidence workshop for the American Association of Clinical Endocrinology. They provide an overview of evidence assessment in clinical practice guidelines. Jonathan Treadwell introduces ECRI as a nonprofit organization focused on patient safety, evidence-based medicine, and technology decision support. He explains that clinical practice guidelines are statements that include recommendations based on a systematic review of evidence and an assessment of the benefits and harms of alternative care options. Treadwell discusses the continuum of evidence quality and the challenges of setting the bar for inclusion in guideline development. He emphasizes the importance of structured processes and transparency in decision-making to minimize bias. Stacey Ewell then discusses the scoping of systematic reviews, the development of key questions, and the planning and conducting of systematic reviews. She provides examples of how key questions can be refined using frameworks like PICOTS (population, intervention, comparator, outcomes, timing, and setting). Ewell highlights the importance of clear and focused key questions to guide literature searches, study selection, data abstraction, and evidence synthesis. The video concludes with an overview of inclusion and exclusion criteria and the potential challenges of conducting systematic reviews. Overall, the video provides a comprehensive introduction to evidence assessment in clinical practice guidelines, with a focus on scoping, key question development, and systematic review processes.
Keywords
evidence workshop
clinical practice guidelines
patient safety
evidence-based medicine
systematic reviews
key questions
PICOTS framework
literature searches
data abstraction