Texas A&M Architecture For Health
Dr. Kirk Hamilton
Season 2023 Episode 15 | 55m 8sVideo has Closed Captions
Architecture for Health Fall 2023
Texas A&M Architecture For Health is a local public television program presented by KAMU
- Well, good afternoon and welcome to the Friday lecture series, Architecture for Health.
It's great to have you with us today.
Our guest speaker today is Dr. Kirk Hamilton, known to many of you, but let me just say a little about his background for those that have not heard.
Kirk was trained in architecture at the University of Texas where he got his BARCH.
And he went into practice, and some years later decided it would be useful to understand how the organizations he was serving operated.
So he went and got his master's in organizational development at Pepperdine.
And after that decided, "Wouldn't it be interesting to study a little bit about the clinical behaviors?"
And he got his PhD in nursing at Arizona State.
So he's had a wide variety of training, lots of multidisciplinary, all rolled into one person.
Now, let me tell you just a little bit about what he's done.
And I met Kirk over 40 years ago.
He's been a longtime friend and colleague, so it is really a pleasure for me to introduce him.
Kirk was a professor, and is now professor emeritus, at A&M, where he spent 18 years, including time as the Beale Endowed Professor of Health Facility Design.
Previously 34 years as a hospital and healthcare architect, fellow emeritus and past president of both the American College of Healthcare Architects and the AIA Academy of Architecture for Health.
I might add co-founder of the American College of Healthcare Architects.
And he is currently a fellow of the American College of Critical Care Medicine.
Yes, as an architect, fellow in the American College of Critical Care Medicine.
Founding co-editor of the quarterly journal, Health Environments Research and Design Journal.
We all know the HERD journal.
We love it, respect it, and cherish it, now entering its 17th year of publication.
So would you please help me welcome Kirk Hamilton to our podium?
(audience applauds) Welcome.
- Thanks, Ray.
Thank you, it's a pleasure to be here.
I've got an unusual presentation for you today.
I think anybody who is involved with our center recognizes the notion of an evidence-based design and evidence-based project.
Well, we are accustomed to teaching people how to find evidence and to interpret its impact on your specific project.
And we're also interested in teaching people how to turn their projects into research by evaluating the outcomes that are associated with it.
So we have a way to use evidence and a way to create evidence.
That's the heart of what we teach.
Well, I'm gonna talk to you today about a whole 'nother realm, and that's the notion of creating evidence-based guidelines.
The ones I'm gonna talk to you about are intensive care unit design.
It's for the Society of Critical Care Medicine.
It involves understanding a way to publish a new version.
So what you see on the screen is the 2012 edition.
At least four people who were a part of that author group are currently working on the 2024 version.
We are using credible, relevant evidence.
The definition that I have had multiple times is that evidence-based design is the conscientious, explicit, and judicious use of current best evidence from research and practice in making critical decisions together with an informed client about the design of each individual and unique project.
And, of course, that means it's all about making decisions, it's treating every project as unique and individual, and it's taking advantage of the current best evidence, which is, of course, changing all the time.
So that's the foundation and the framework behind which we believe we ought to be using the best information we have available.
Well, we know that contemporary ICU patient rooms at different places around the world are rather extraordinary settings.
They have advanced the state-of-the-art of ICU care in many ways.
And that also has to be factored into the notion of what we might write if we're gonna write new guidelines.
Well, we have a guidelines task force.
It's composed of some 25-plus subject matter experts from around the globe.
Different professions.
There are physicians, nurses, architects, engineers, therapists of various types and so on.
So the idea is we are to replace the 2012 guidelines.
The process began back in 2022 and the target is to publish next year in 2024.
Now, at the bottom of the slide, you'll see something, a PICO question.
This one is PICO question 4.1, which is one of 15 PICO questions that we were using as a part of the process.
And this particular one is, "Should we recommend designing ICUs with outside of room monitoring and control of devices?"
So the method of this PICO exercise, as you can see in the upper left, you form these questions.
They're questions that allow you to go to the relevant research literature and find information.
You select the outcomes that are associated with it, and you determine whether an outcome is a critical outcome, an important outcome, or maybe not important.
You go to the literature, which you see in the middle with the symbolic pages, and you produce, as a result of what you find, you produce evidence profiles that tell you about that question, "This is what the evidence is telling us."
And you have degrees of high to low in terms of the quality of that material and so on.
Then on the horizontal line that runs through the middle of the slide, the top part is finding the information.
The bottom part is, "Okay, what are we gonna do with it?"
And so you rate the overall certainty of the evidence across the various outcomes.
You see as the green arrow points down from the right-hand side.
And then you go across to use the guideline panel.
And they deal with recommendations that move from evidence to recommendation.
This is much like the process you would use in a project where you're interpreting what the evidence tells you as it relates to your project.
And so you look at the benefits and the possible downsides.
You look at how certain you are about the evidence, the values, preferences, the cost, feasibility, the acceptability to the field, and equity, you look at whether or not accepting this concept and introducing it into projects means that some group might be disadvantaged.
And then, ultimately, you formulate the actual recommendations.
You write the actual text, and it eventually becomes the guidelines on the lower right.
Well, the people who are doing this are around the world.
We conduct our work by Zoom and email.
The two people on the upper left are methodologists.
They're both ICU physicians.
One is at McMaster University in Canada and the other one is at the University of Calgary in Canada.
Three architects are on the screen who actually worked on the previous 2012 guidelines.
The whole idea of having a task force of more than 25 people is that the work is done by consensus.
No single person has the ability to create the text and insert their personal opinion into the work.
So as I mentioned, there are 15 PICO questions.
Well, how did we even develop which ones we would use?
I think you probably would find it impossible to read this, but at least what you see here is there's a round one, a round two, and a round three of voting.
And you can see that the median voting is listed in a number.
It's a 1 to 10 number.
And so you're seeing in the first, round one, there's two that scored an eight and two that scored a seven and so on.
So the ranking of how important the task force feels that question might be.
So, example, at the very top it says, "Do ICUs designed for high visibility, for example, concentric pod designs, open designs, direct lines of sight between staff/patients, versus standard ICU designs without high visibility, for example, linear rooms, improve patient outcomes, safety events, staff satisfaction, time until critical interventions?"
Well, that one scored quite high all the way through.
And as you would go through this list, you would eventually come out with a list like this.
You can see 14 here; two of them, under number two, are intended to be combined because they were similar enough.
So if you look at the Round 3 column, it shows the median.
So the scores are in the eights and sevens.
This was the way in which it was narrowed down.
Any of the proposed research questions that would go out and have a literature search done and so forth, but that scored lower than seven, simply didn't make the cut.
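The screening rule just described, where each candidate question is scored 1 to 10 and only those with a median at or above seven survive, can be sketched in a few lines. The question labels and vote lists below are invented placeholders, not the task force's actual data:

```python
from statistics import median

def shortlist(votes_by_question, cutoff=7):
    """Keep the questions whose median score meets the cutoff.

    votes_by_question: dict mapping a question label to the list of
    1-10 scores cast by panel members in one voting round.
    Returns (question, median score) pairs, best first.
    """
    kept = []
    for question, scores in votes_by_question.items():
        m = median(scores)
        if m >= cutoff:
            kept.append((question, m))
    return sorted(kept, key=lambda pair: -pair[1])

# Hypothetical round of votes (a real panel would have 25-plus scores each).
round_votes = {
    "high-visibility layouts": [8, 9, 8, 7, 8],
    "outside-room device control": [7, 7, 8, 7, 6],
    "dedicated locker rooms": [5, 6, 4, 5, 6],
}
# The locker-room question falls below the cutoff and is dropped.
print(shortlist(round_votes))
```

Running several rounds of this and re-voting on the survivors is essentially the narrowing process the slide shows.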
You have the panel looking at the summary of the 2012 guidelines.
We have an outline of what was in the previous one.
We developed a draft, a list of these PICO questions.
We have online meetings to refine those questions.
It includes voting.
And then we ended up with a final list of 15.
McMaster University has health science librarians who develop the database and go out and find the material.
One of the methodologists is at McMaster; the other, previously at McMaster, is now at Calgary.
And they go to all of the known sources like Ovid and Embase, CINAHL, The Cochrane Library, dissertations that are found in ProQuest and so on.
So PubMed, you know, anything that you can imagine.
Well, the literature search is done using the PRISMA methodology.
You can see here the diagram of it.
Again, you probably can't read all the text, but note that 42,000 papers were found as a result of the original search.
And then by the different ways in which duplicates were removed or studies that didn't have the right kind of information, studies were excluded because of non-English or abstracts only, or the wrong outcomes, the wrong population, the wrong environment and so forth.
It ends up with 212 papers that were considered as the meat of what the task force is supposed to interpret to include in the guidelines.
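That funnel, from a broad initial search winnowed stage by stage down to the included set, is what a PRISMA flow diagram records, and it can be tallied very simply. Only the starting total of 42,000 and the final 212 come from the talk; the stage names and intermediate counts below are made up for illustration:

```python
def prisma_flow(identified, exclusions):
    """Apply staged exclusions and report the running count after each stage."""
    remaining = identified
    flow = [("records identified", remaining)]
    for reason, count in exclusions:
        remaining -= count
        flow.append((f"after removing {reason}", remaining))
    return flow

# Illustrative breakdown; real PRISMA diagrams report each exclusion reason.
stages = [
    ("duplicates", 12000),
    ("wrong population or environment", 24000),
    ("non-English or abstract-only", 4500),
    ("wrong outcomes", 1288),
]
for label, n in prisma_flow(42000, stages):
    print(f"{label}: {n}")
```

The point of keeping the tally explicit is auditability: anyone reading the guideline can see exactly where the other 41,788 papers went.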
So the way in which literature is appraised, of course, we use a process called GRADE that I think comes from The Cochrane Library.
I may have that wrong.
But it basically goes from very low to high.
Your certainty about whether or not the content is accurate and is reliable versus at the very low end, the true effect of the intervention is still a bit uncertain.
So, you see, we are asked to evaluate what we are reading.
And we ended up having to read all of these.
I personally read something in the neighborhood of 5,000 abstracts to get it down toward the 212 that ultimately were selected.
So every member of the task force had roles in doing that, and every paper had to be evaluated by at least two people.
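That two-reviewers-per-paper rule is easy to mechanize with a round-robin assignment. The reviewer and paper names here are hypothetical, and real screening tools add conflict-of-interest checks this sketch omits:

```python
from itertools import cycle

def assign_reviewers(papers, reviewers, per_paper=2):
    """Round-robin assignment so every paper gets `per_paper` distinct reviewers.

    Requires at least `per_paper` reviewers; cycling through the pool
    guarantees the reviewers on any one paper are distinct.
    """
    if len(reviewers) < per_paper:
        raise ValueError("need at least as many reviewers as slots per paper")
    pool = cycle(reviewers)
    return {paper: [next(pool) for _ in range(per_paper)] for paper in papers}

# Hypothetical roster; a real task force would spread thousands of
# abstracts across 25-plus members this way.
assignments = assign_reviewers(
    papers=["paper-001", "paper-002", "paper-003"],
    reviewers=["architect A", "nurse B", "physician C"],
)
print(assignments)
```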
So quality of the evidence is based on, often described as an evidence pyramid.
At the bottom, you can see expert opinion, which is the type of thing that practitioners provide.
They've worked in this area, even clinical practitioners, architectural practitioners, engineering practitioners.
All of that personal experience becomes expert opinion.
And it is relevant.
It just happens to be nowhere near as relevant as the higher levels.
You see the next three levels, including randomized controlled trials, cohort studies, case-control studies.
And this is unfiltered information that comes directly from the study itself.
Well, recognize in the environment world, we don't often do randomized controlled trials.
It's very hard to have a new building and then have a control building that is the old one.
We don't always have that ability to compare the way they would do in a scientific trial.
The preponderance of the evidence that's in the environmental domain basically falls on the lower end of the pyramid.
Now, as you get up to the upper levels, you're dealing with filtered information where you use synopsis, synthesis, and, ultimately, systematic reviews.
And so systematic reviews are bringing together material from multiple sources, multiple papers, multiple research studies and so on.
And as a result of their systematic character, the data that comes out of that is extremely reliable.
So there are instances where getting some kind of systematic review raises the evidence above the shortfall that environmental research often has from not doing randomized controlled trials.
Well, we've been talking about PICO questions a lot.
A PICO question is a way to organize your thinking about research questions.
So the P stands for the population that you're studying.
I stands for the intervention that you're considering as a possibility.
C is the comparison with the lack of the intervention.
And then O is the outcome that you measure.
An example would be for the population of adult ICU patients, that's the population, does outside room monitoring and device control, that's the intervention, when compared to the absence of such capabilities, that's the C, comparison, produce improved clinical outcomes?
And so you could list a variety of clinical outcomes that you would want to know about.
But that's an example of a PICO question.
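The P/I/C/O breakdown just walked through maps naturally onto a small record type. This sketch uses the outside-room monitoring example from the talk; the class and field names are my own choosing, not anything from the society's process:

```python
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str    # P: who is being studied
    intervention: str  # I: the design feature under consideration
    comparison: str    # C: the alternative, usually the absence of it
    outcomes: list     # O: what you measure

    def as_question(self):
        """Render the four parts back into a readable research question."""
        return (f"For {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, improve "
                f"{', '.join(self.outcomes)}?")

pico_4_1 = PicoQuestion(
    population="adult ICU patients",
    intervention="outside-room monitoring and device control",
    comparison="inside-room monitoring only",
    outcomes=["patient safety", "length of stay", "staff injury"],
)
print(pico_4_1.as_question())
```

Keeping the four parts separate is the point of the format: the P and O drive the literature search terms, while the I versus C contrast is what the evidence profile has to answer.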
And I'm only gonna use the one.
As I said, there are 15 that we're working with and we're almost finished with reviewing all 15.
But I thought this one was gonna be an easy one to explain.
So if you remember, COVID has produced a whole variety of situations where we see equipment being moved outside of the patient rooms in order to reduce the number of times that staff had to enter and expose themselves to the pandemic conditions.
Shortages of personal protective gear required reduced staff exposure.
And so there are instances where people drilled holes or ports in the corridor wall so that they could bring umbilicals and wires to machines that were in the room, or brought the machines out of the room, a variety of techniques.
As a result of that, you see corridors that began to have an awful lot of stuff like the IV poles that you see in the image on the right.
We even have examples of telemedicine being used like a control tower at an airport, where people outside of the hot zone where the patients are located are helping the staff working inside the hot zone.
And so you see examples from corridors in acute hospital situations.
Here, you see an example from Israel where the control station has people that are not fully gowned up in the protective gear and are able to work with all the computer information to assist the staff that is inside the hot zone.
So what about our PICO question now that we know what we're asking about?
Well, they only found three studies.
The study summary (chuckles) is that there's a paucity of quality literature.
Only three papers are included, and they talk about telemonitoring, they talk about reducing errors, increasing efficiency, all of that sort of stuff.
But most of it is indirect evidence.
Five studies were excluded and a lot of the primary literature is observational.
Again, there are no randomized controlled trials.
Well, now we use a computerized process that the methodologists walk us through, as the task force, to answer the questions.
So here you see that evaluation of the literature found.
They only found three studies, they're listed at the bottom.
We go then to the next question, "Should outside-room monitoring and control of devices versus inside only be used for intensive care units?"
So this simply repeats the PICO question.
The population, P, is intensive care units; intervention, outside-room monitoring and control of devices; comparisons, the absence of it, inside room only.
And then outcomes, the outcomes, there were a whole variety that the task force looked at, but there were things like patient safety, hospital-acquired infection, length of stay, mortality, staff injury, and cost, among others.
So then the next screen they ask us, "Well, is this problem a priority?"
And the task force answered yes, you can see on the left hand side.
And the idea was that it has the potential to decrease adverse events, to reduce errors, improve efficiencies, it may lead to fewer delays.
And if you look on the right-hand column, these additional considerations was, well, we're being able to take advantage of technology advances.
And the bottom one, we are getting some lessons from our experience with the COVID situation that suggests that this is doable and that it is a priority.
So then the screen changes and the methodologist takes the group to the next one.
Let's talk about the desirable and undesirable effects of the intervention.
How substantial are these effects?
Well, on the desirable one in the top, it's called small.
I've circled that in red.
And the possible effects are, well, it may reduce error, it may reduce staff fatigue, it may increase staff satisfaction.
But those desirable effects are relatively small, all things considered.
How about undesirable effects?
Well, again, small was what the group chose and they felt that it was a potential reduction of hands-on observation by staff.
It's tough enough to be an ICU patient without having to see a person in this sort of space suit, or not being able to have people in there at all, including your family members and so on.
The potential reduction of human interaction was considered to be an undesirable potential effect.
Next question was, what's the certainty of the evidence?
What's the overall certainty of what is available in the effects?
Well, the certainty was very low because there was very limited evidence.
There was not much, other than people's personal experience of having observed these things, there's not much in the literature.
How about the values?
The values, is there an important uncertainty about or variability in how much people value the main outcomes?
Well, we've circled here possibly important uncertainty or variability, which is repeated in that middle column.
And then on the far right, a recognition that maybe its importance is gonna be different for different stakeholders.
So will physicians feel differently from nurses who will feel differently from patients who will feel differently from family members?
And so the recognition that it could vary depending on who you're talking about becomes something that the group was discussing.
Well, how about the balance of effects?
If you take into account all the effects we've discussed, does the balance between desirable and undesirable effects favor the intervention or the comparison?
Meaning the lack of the intervention.
And so it's circled as probably favors the intervention.
You see that in the larger type.
And then you say, "All right, well, we're probably favoring the intervention.
Now, what about the resources required to do it?"
It turns out that if you wanted to know how large the resources might be to achieve what you are talking about, well, we as experts said, "Moderate costs."
Punching a hole in the corridor wall and maybe having slightly wider corridors is pretty easy if you're in new construction.
It's not so easy if you're in renovation.
But the main point here is it was not addressed by any of the studies.
So none of the literature answered that question.
We had to rely on the opinions of the task force members.
What is the certainty of evidence about the required resources?
In other words, is it going to cost a lot in time and effort?
And the major one is cost in terms of money.
And it is not addressed by any of the studies that were found.
The task force believed that it was not so unusual an expense that it should be avoided outright, although please note, it's easier in new construction than in renovation.
And then cost-effectiveness.
So we've talked about resources, now they wanna know does the cost-effectiveness of the intervention favor the intervention or the comparison?
And the answer was it varies.
It was not addressed by any of the studies.
It varies by the type of unit; pediatrics, trauma, cardiac, and other types might have different answers.
It varies by the type of intervention.
Are you talking about just a simple porthole in the side of the corridor wall or are you talking about creating a studio style center for observing into other areas and so on?
And it, of course, varies in new construction versus renovation.
So equity is a question that the society tries to make sure we are always aware of and always answering.
What would be the impact on health equity if we implemented the intervention?
The answer that the group had is, "Probably no impact."
The fact that you have moved equipment to the corridor instead of having it in the room, it's not gonna disadvantage any one group.
You're not going to discover that the handicapped, or veterans, or persons from a particular ethnic group, nobody is going to be differentially affected by that decision.
Even though it wasn't addressed by any of the studies, there didn't seem to be any advantage or disadvantage to a specific population.
And so as a result, there was probably no impact on equity.
Well, all right, now then the next questions were about acceptability and feasibility.
So acceptability, is the intervention acceptable to the key stakeholders?
So would physicians, nurses, patients and their families accept this as a change?
Well, we said yes.
It was not addressed by any of the studies, but we said it was likely acceptable to the staff based on their COVID experience.
The fact that almost all of them had dealt with some variation of this question, or felt frustration because it could not be done, suggested to the task force, as expert opinion, if you will, that, yes, such a change would be acceptable to the populations concerned.
Well, how about feasibility?
Is the intervention feasible to implement?
Can it be done?
And the answer was yes.
It was not addressed by any of the studies, but the task force said, "It was really clear that feasibility had been demonstrated during COVID."
It was being done by a variety of people who simply demonstrated that it was possible.
So, yes, the intervention is feasible.
So this might seem like a complicated process.
Remember, we're going through these computer screens and all of the group is listening and sort of following and answering the questions as a group and having to come to a consensus and so on.
And there are a lot of questions there, questions about the problem and the desirable and undesirable effects, the certainty about the evidence, the values that are impacted, the balance of all these effects, the resources and the cost and so on, all the way to the point of equity.
Well, at the end, there is a summary of judgments.
So depending on what the group had decided at any one of those questions, you see a screen like the one you see here where the first four columns go from no to probably no to probably yes to yes.
And then the last two columns offer two more possible answers: varies or don't know.
And then there's a middle column that seems to be about whether you favor the intervention or whether the savings or costs are dealt with.
So what you see on the screen, the ones that have a yellow overlay, are the ones that were chosen by this particular group for this particular PICO at this moment.
So where does that leave us?
Well, it suggests that there is a type of recommendation.
There are five choices.
So you see on the extreme left it says, "Strong recommendation against the intervention."
Whatever you do don't do it.
The second one says, "Eh, conditional recommendation against the intervention."
You know, sort of the maybe.
And then the middle one is neutral.
It's basically conditional recommendation for either the intervention or the comparison.
And then as you keep going to the right, you have a conditional recommendation for the intervention.
And then the extreme right, a strong recommendation for the intervention.
So if we were doing a drug trial and it turned out that the drug was unbelievably effective, you'd pick the one on the extreme right.
If the drug was no better than a placebo, you would pick the ones on the left.
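The five-point scale just described can be written down as an ordered mapping. The labels follow the talk; the numeric codes from -2 (strongly against) to +2 (strongly for) are my own shorthand, not the society's notation:

```python
# The five possible recommendation strengths, ordered from strongly
# against the intervention to strongly for it.
RECOMMENDATION_SCALE = {
    -2: "strong recommendation against the intervention",
    -1: "conditional recommendation against the intervention",
     0: "conditional recommendation for either the intervention or the comparison",
    +1: "conditional recommendation for the intervention",
    +2: "strong recommendation for the intervention",
}

def describe(strength):
    """Translate a numeric verdict into its guideline wording."""
    return RECOMMENDATION_SCALE[strength]

# For the PICO discussed in the talk, the task force landed at +1:
print(describe(+1))
```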
Well, we're talking about physical environments here and using evidence-based models for that.
Well, ultimately, for the PICO that we're talking about today, the task force has chosen conditional for the intervention.
So they're basically saying, "Yes, it works, it's acceptable, it's feasible.
It's not awful in terms of cost, if you're dealing with new construction.
But it's not something where we feel so strongly that we're gonna say it's a mandate that from here forward, every single place that's built for an ICU should do this."
That's the strong recommendation.
We're not at that, we're at the, "This looks pretty good to us."
The conclusion then says, "We suggest designing ICUs with the capacity for monitoring and controlling devices outside of patient rooms."
It's a conditional recommendation, very low certainty.
And the justification, well, while the ability to monitor and control devices outside the room seems to have face validity as a beneficial intervention, there is very limited evidence to guide practice here.
There may be important implications for staff and patients if outside-room technology is used and more research is required to determine the impacts of using this approach.
So, ultimately, we're now at the end of the process of coming to a conclusion about that one PICO.
And we're recommending that they do provide capacity for monitoring and control outside of the patient room.
It's a conditional recommendation, very low certainty.
It does have face validity, but there is such a minor amount of evidence that more research is required.
Now, how often in the evidence-based world do we hear those words?
"More research is required."
(chuckles) In the medical world, there is so much more evidence about clinical items than there is about environmental items.
And so we are, as designers and architects and practitioners working with the physical environment, we are dealing with a much smaller evidence base than is often expected in the medical world.
So what is it that the guidelines are going to say?
Well, the task force now has to draft that text and arrive at a consensus with a vote.
So the leadership of the task force consists of two groups.
And each group is going to essentially draft half of the content.
And then they will be the peer review for the other half.
So they will exchange their work.
They're either the original or they're the peer reviewers.
So everything gets reviewed twice.
Ultimately, when the task force says, "We agree, this is the text," it's not finished; it has to be approved by the regents of the Society of Critical Care Medicine.
And if it is approved, it then goes to Critical Care Medicine magazine or journal.
It's a peer-reviewed journal published by the society, and the intention is that the guidelines will be published in 2024.
Now, when I talked about those two leadership groups, each of them is composed of three people.
There's an architect, a physician, and a nurse in each of those two leadership groups.
And half of those people are people who had worked on the 2012 edition.
That's the idea.
So between now and sometime in the spring of next year, we're gonna be doing a lot of writing.
We've been given a limit in terms of numbers of words that will be in the official guideline publication.
But we've been given the freedom to include a certain amount of supplementary material, which we may have to use.
It turns out that everybody on the task force currently believes (chuckles) it's gonna be really hard to keep it down to 3,000 words.
So we'll see what happens.
In fact, now that I think about it, I'm gonna go back and actually count the words in the 2012.
I have a feeling that that may be past 3,000.
So my suggestion is, if you find this at all interesting, stay tuned next year; we will actually produce the guidelines based on these PICO questions and on revising material other than the PICO questions.
The PICO questions are the ones that you send out for evidence searches.
But that doesn't mean we might not have a comment in the guidelines about locker rooms that didn't get dealt with as a PICO question.
We may have items that were in the previous guidelines that will continue to be relevant and there's no need for us to have searched the literature to revise it.
There's a combination of things that is gonna be written about these 15 questions and the stuff that is written for other reasons.
I just simply would suggest that you stay tuned and that if you have the opportunity to get involved in something like this, it's a fascinating process.
I feel like I've learned a tremendous amount, and I'm one of the people that's considered a so-called expert.
I encourage you to involve yourself in anything that smacks of this.
So thank you.
(audience applauds) - [Ray] Questions for Kirk?
- I think we have time for some questions.
- [Ray] We do.
- I have, just fascinating and thank you.
The PICO questions, where did they originate?
You said you sent them out for feedback or solicited them.
Where did you gather those?
- The 25 plus task force members all sort of said, "Well, these are important questions."
You saw the screen where there were votes.
Something acquired an eight point versus a seven point or six or whatever.
That was how the task force examining the questions that others had submitted rated them.
And as I said, only the options that had eights and sevens were ultimately considered.
And the methodologist thought, "Ooh, you guys are asking too many questions."
Apparently, if you're doing drug trials, you don't need quite that many PICOs.
The process required us to, basically, well, we asked everybody to read the 2012, read the old guidelines and tell us what you think needs to be saved, what's material that wasn't in there that has changed and needs to be added and so on.
So there are a lot of conversations that were pretty open.
This was before we got to voting and all that.
We were also asked as experts to provide examples of literature that might not have come up in the search.
There weren't many because (chuckles) the search was really exhaustive.
But there were some.
So you get an engineer who talks about particles that might be transmitting disease.
And they might not have found that in the type of search questions that they were asking.
So, yeah, it's been fascinating to go through.
And I am currently thinking, I want a copy, I want the reference list of those 212 references.
I wanted to share it with the ICU design committee, which is a different committee that runs a design awards program.
But I think they ought to be made aware of what we have in terms of the literature.
I think there's an article in there.
Anybody wanna write an article about 212 papers that are in the form of a systematic review?
There's room for something like that.
- [Attendee] So question about the guidelines.
It's supposed to be a global guideline that's going to impact future ICU units across the globe.
How do you accommodate that in the 2024 version versus the one from almost a decade ago, with the changes in technology and the changes in practice?
- That's a great question.
The question is basically how do we accommodate differences over time and differences within writing one guideline that is essentially global.
And the truth is the committee, or the task force, understands completely that these guidelines are going to be used by rural hospitals that are small and they're gonna be used by enormous teaching hospitals that are gigantic.
And the amount of resources that they have in different locations are quite different.
This is part of why you get a conditional recommendation instead of a strong recommendation.
We also recognize the differences across the world.
An ICU in Switzerland may be quite different from an ICU in South Africa or the Middle East.
There are conditions and cultural issues that influence the way care is given around the world.
The answer to your question is the task force discusses it with those things in mind.
So if we think we're proposing something that's totally unreasonable to the community hospital, but it's something that the teaching hospital absolutely wants to do and ought to do, we're not going to have it in the guideline that you must do what the teaching hospital should be doing.
So there's a certain self-regulation where the task force itself is always asking itself, "Is this reasonable for different countries?
Is this reasonable for different scales of hospitals and organizations with different resources?"
- [Attendee] Does the guideline set bare minimum requirements to (indistinct)?
So, for example, the safety field in architecture practice is basically minimum requirements.
You don't really (indistinct) to that minimum, but you're trying to hit the baseline.
And then you add some things to it.
Is that the approach here with recommended things versus the (indistinct) things?
- So the question is about whether the recommendations produce minimum standards or something else.
The 2012 edition, and I was one of the coauthors, was a first in the society's material in that it refused to do minimum standards.
Because in the experience of everybody on the task force, the subject matter experts all said, "Well, if you list a minimum standard, that's all that ever gets built."
That the financial officer of the organization will squash the project to the minimum.
So for the first time in 2012, SCCM produced performance guidelines.
And the idea was this is what needs to be done around a patient bed.
And so if you're in a, a perfect example is if you're in a teaching hospital in the Houston Medical Center, you may need far more equipment and far more room to be around that bed than you would if you're at a community hospital in Amarillo.
And so the guidelines were written to say, "You must be able to do these things with the equipment that you have."
It doesn't say minimum of so many square feet.
Well, that was a first in 2012.
It was successful and it will be repeated in 2024.
We will have performance guidelines, not minimum standards.
Now, there are other people that do minimum standards.
So if you go to the FGI guidelines and look up ICUs, you'll see minimum standards.
But you won't get 'em in the SCCM guidelines.
That's a great question.
It's an important question for anybody producing guidelines of any type.
The natural tendency is to stick with the minimum so that you don't overspend, so that you don't exceed.
And so minimum standards have, without intending to, turned into maximum standards.
I think the AIA has done a pretty good job of trying to suggest that people ought to be working with performance standards rather than minimum standards.
But it's hard to get away from that.
- Kirk?
- Yes, sir.
- [Ray] What did you take away from the consensus experience, from going through the research and building consensus from the findings?
What lessons are there in your methods for the practitioners out there that want to be able to use research and put that into their practice?
What methodological insights have you gleaned from going through things as thoroughly as you've had to do for this?
- Well, I'm not sure I have an answer for that, Ray.
The question is about lessons that I might have gleaned working through a process that's this rigorous.
I have been stunned by the level of investigation to produce the fundamental body of knowledge that's going to be used, gonna be interpreted.
If we had not had the methodologists and the librarians from McMaster University, all of whom are being paid, by the way, SCCM pays them to do this for task forces.
And they found it a little unusual to work with an environmental task force instead of a drug type task force or a procedure type task force.
But the rigor that they used to track down the information.
And then the process by which they, I mean, I showed you screen captures of how we walked through a whole series of questions that we were guided to go through.
My personal methods have always been a little more casual than that.
(chuckles) My interpretation of what is the relevance to my project of that particular research evidence has been just a kind of a personal, you know, "Okay, this is what I think it means."
Well, did I really ask about equity?
Did I really ask about availability of resources?
Did I really ask about acceptability to the different stakeholders?
Did I really ask about very careful thought of counting up the positive effects versus the negative effects?
I was never that rigorous.
I found myself finding the material and then saying, "This is really cool stuff.
This is really interesting."
And it changes my mind in terms of the way I interpret what it might mean for my project, but not nearly so rigorous in terms of a process that has been shown to be effective.
Using the computerized system, I mean, even the simplest stuff.
So librarians find 42,000 papers.
I've got a task force of 25 people who are now required to read 42,000 abstracts.
And, oh, by the way, you have to have at least two people read 'em.
So we're talking about 84,000 abstracts.
Going through all that and saying, "No, that's silly.
That really doesn't relate to design stuff at all."
It's easy to disregard it.
And then, "This one is absolutely relevant, let's keep it in," or, "Yeah, this could be."
And if you say this could be, they keep it until it can be reevaluated as to whether it is or isn't relevant.
The idea that you get a consensus with a broader group, a larger number, and, therefore, the consensus has a higher possible level of validity, is different from what I think we see in most firms where the decisions about the relevance of particular evidence and how it's gonna impact an actual project may be kept between two or three people.
It's not going to be a consensus process with 25 subject matter experts.
Learning how to find the right combination of people to review, I mean, some of the people that were added to the task force were people that we knew, and we invited them specifically because we knew their expertise was crucial to what we wanted to know.
In the practice environment, make sure you've got all the right expertise around the table to make those decisions.
The thing that I bemoan, I regret, I resent, is that practice has such poor access to relevant research.
So here we are at a university where any student can go to the library system, search the world, and bring things in; if it's not here, if it's not available for download, you tell 'em to go get it, and you'll eventually have it in your hands.
Why is that not possible for the designer at the drafting table in any firm anywhere in the world trying to design a hospital using the best possible information?
The profession has not given us, as practitioners, the same access to information, quality information that any graduate student has at almost any university anywhere in the world.
I think that's a crime.
I once proposed that the AIA should treat itself like a major library and pay whatever it takes to get any AIA member the same access I was getting as a faculty member here at the university.
That was a really expensive proposition, it was not accepted.
They were briefly willing to buy the rights to a certain amount of construction information, but they were certainly not looking at the global amount of information that's available to help people improve the quality of their design.
I think that's true not just about intensive care; that just happens to be my area.
I think you can have powerful evidence-based advances in museum art, in education, even in religious facilities, certainly.
You know, better fire stations.
It all has the potential to be improved through an evidence-based process.
- [Ray] Any further questions?
Kirk, it seems that we're just about out of time and out of questions at the same moment.
So let me say one more time, thank you for being here.
Always a pleasure.
One more time.
(audience applauds) - Thank you.
Support for PBS provided by:
Texas A&M Architecture For Health is a local public television program presented by KAMU