Texas A&M Architecture For Health
Evidence-Based Medicine
Season 2022 Episode 16 | 50m 26s | Video has Closed Captions
Dr. Kirk Hamilton presents Evidence-Based Medicine
Texas A&M Architecture For Health is a local public television program presented by KAMU
- Good afternoon and welcome to the Architecture for Health Lecture Series.
Dr. Pentecost couldn't join us today, so instead, I am going to be introducing our host.
Our presenter today is Dr. Kirk Hamilton.
Dr. Hamilton has spent more than 30 years as a hospital architect before joining Texas A&M in 2004.
He was the Beale Endowed Professor of Health Facility Design in the College of Architecture and a faculty fellow of the Center for Health Systems and Design.
He is now Professor Emeritus.
One of only two architects elevated to Fellowship in the American College of Critical Care Medicine, he is also an Emeritus Fellow of the American College of Healthcare Architects and of the American Institute of Architects.
His Bachelor of Architecture is from the University of Texas at Austin, his Master of Science in Organization Development is from Pepperdine University, and his PhD is in Nursing and Healthcare Innovation from Arizona State University.
He is a founding co-editor of the international, peer-reviewed "Health Environments Research & Design" journal.
He researches evidence-based critical care and the relationship of design to measurable outcomes, and we are very happy to have him here today.
Dr. Hamilton?
(audience applauding) - Thank you, Cynthia.
I'm pretty well-known as an advocate, a person who basically promotes the idea that one should be designing with evidence wherever possible.
The idea is pretty simple.
Everybody wants to have better design outcomes and it behooves us to make better design decisions in order to obtain that.
So better information leads to better decisions, and sometimes that better information is going to come from serious research.
So Roger Ulrich published a fantastic article in 1984.
It was the first time somebody connected clinical information, clinical outcomes, to design of the environment.
Most of us who were designing the environments in those days believed that as our instinct, but we didn't have any justification that supported us.
Well, Ulrich gave us that justification in 1984.
Along the way, for those of us working in the health realm, evidence-based medicine became an example of what it was that we believed.
So Sackett and his colleagues in Canada published, "Evidence-based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."
A fantastic idea.
It's all about making decisions, it's treating each patient individually, and it's using the current best evidence.
The idea of evidence-based medicine is pretty easy to support.
We would certainly all like our physicians to be thinking that way.
Well, interestingly enough, 1996, when Sackett published this, was the first time you could find evidence-based medicine in the literature, which is sort of shocking, because in the 1880s we began to have scientific medicine, let's say.
You would think that the notion of basing medicine on evidence and on science would've been written about and proposed quite a bit earlier.
Interestingly enough, 1996 is an interesting time for us in architecture.
In 1969, at McMaster University in Canada, Eberhard Zeidler and his folks had designed a really pioneering university hospital project, large open span spaces in two directions, no columns, interstitial space above to allow access to people who could change the mechanical systems and so on without disrupting the space below.
And this is at exactly the same time that David Sackett was in the Department of Clinical Epidemiology and Biostatistics.
He became the head of that department in 1967 and would go on in 1996 to publish his definition.
Well, I, frankly, stole shamelessly the definition from Sackett, and I published multiple times in the 2000s, "Evidence-based design is the conscientious, explicit, and judicious use of current best evidence," sort of stolen precisely from the Sackett definition, "best evidence from research and practice in making critical decisions, together with an informed client, about the design of each individual and unique project."
Well, you know, that's an homage, if you will, to the original evidence-based medicine definition.
And it's the same thing.
It's about making decisions.
It's about treating individual projects as unique.
So even if you're doing multiple McDonald's restaurants, they're all gonna be different because of how the electricity comes into the site, the site configuration, you know, the roadways, the curb cuts, and so on.
They're all individual projects.
And then using current best evidence is common to the whole idea.
So Ulrich really led us down this path with his 1984 study.
Then in the 1990 timeframe, six years later, he introduced a theory of supportive design.
Basically, he recognized that stress exacerbates or makes worse every known clinical condition.
And so he thinks for a while and he says, "Well, you know, maybe it's a smart idea for design to reduce stress."
And his theory included elements like social support, sense of control, positive distractions, each of which had variations and multiple elements.
Well, for many years an awful lot of people thought this one definition, this one theory, was all of what evidence-based design was about.
Well, of course that's not true.
Evidence-based design can be used for many more reasons than just this stress issue.
And we have always, as architects and designers and engineers, been using evidence.
We've used evidence from structural material, from mechanical ventilation, air movement, from climate.
We've been using evidence from real estate economics.
The new thing here is we're talking about using evidence from the social sciences and from the clinical sciences.
That's what's new here.
Well, if we take Ulrich's theory of supportive design, he went on to talk about design to remove environmental stressors: the stress of illness is partially reduced by therapy.
The stress of the situation is partially reduced by the model of care.
And what he concluded was that the stress of the environment, the physical environment's stress on the patient, can be partially remediated by good design.
Well, this is an approval for architects having a role.
Good design has a role in terms of helping patients get better and so on.
Well, some time not long after this, about the time I was about to change to becoming an academic out of my professional practice, I was asked to write an article about what is an evidence-based project.
I found I couldn't do it.
So instead I wrote an article about evidence-based practice and that led to this notion of multiple levels.
Level One, unfortunately, is where I spent the majority of my career.
I didn't think about this until about 2003, and by 2004, I became an academic.
So Level One believes that this is a good idea, tries to stay up with the literature, makes interpretations of the literature to include in design, and then goes around and collects the success stories.
That was what I was doing for most of my career.
And the types of things we were doing back in the '60s when I was just getting out of school, is we were looking at people like Edward T. White.
We were looking at a variety of things like proxemics and spatial proximity and so on as slightly beyond what we were doing with the sort of engineering concepts.
Well, as a result of me being in a master's program, I had now started to think in new ways.
And at this point I hypothesized Level Two, that you do everything Level One says, but in this case, you're gonna hypothesize what the outcome will be that matches your design intervention.
You're gonna figure out how to measure those outcomes and then you commit to measuring the results.
And I personally came to an opinion that Level Two is really, for me anyway, the minimum level at which you can consider yourself to be an evidence-based practitioner.
You have to put design concepts into your projects with an intention for them to deliver a particular outcome and you must be committed to measuring those results.
Well, Level Three basically says you do everything of the first two levels, but in addition to that, you're gonna share the results publicly.
You're gonna use independent post-occupancy reviews so that you're not evaluating your own work.
You know, you're gonna speak at conferences, write articles, do a variety of things to publicize what's going on.
And then Level Four is this sort of, it's not exactly an afterthought, but it basically says at some point some of the things that you discover, some of the things you're gonna work with, deserve to be treated with peer review and academic level of rigor.
And the bottom bullet there, pursuing a graduate education, I realized that I wasn't as well educated as I needed to be and so I went back to get my doctorate.
Well, I don't expect you to read this chart, but what's important here is to note that it's cumulative.
So everything that you did in Level One becomes part of Level Two, Three, and Four, and so on, until at Level Four you're doing everything that came from any one of them.
Now, this was very confusing.
When this was published in 2003, people in the profession began to be really nervous.
It's like, "Oh, my goodness.
I've got to take my practice up to Level Four.
How am I gonna do that?"
And the reality is, not everything needs to be at Level Four.
The vast majority of projects people do are gonna be at Level One.
You sort of move a door from over here to over there.
You change an arrangement of some kind.
Maybe you even change the function of a room.
But you don't need to publish that in a peer-reviewed journal, right?
Most major projects really do deserve hypothesis and measurement, and sometimes those will deserve to be presented publicly in magazines and conference presentations and the like.
And it's really only a relatively small number of your projects where you've discovered something significant enough that you need to publish it in a peer-reviewed scholarly publication.
So here's where the apology comes.
I've been a proponent of this pretty aggressive, really, in suggesting that the profession was not up to the standard that it needed to be, that we were behind other professions in terms of the rigor with which we applied ourselves to our projects.
Then one day I found myself in Gothenburg, Sweden, at Chalmers University.
Roger Ulrich and I were presenting an evidence-based design workshop for Swedish architects.
And at some point, Roger pointed out that this project you see on the screen, the Sahlgrenska Ostra Acute Psychiatric Ward, a psychiatric facility, was really well-regarded.
It was receiving publicity.
And Roger had studied this and he determined that the architect had used eight of the nine known evidence-based concepts in the design.
Well, the architect who had designed it was in the room, and he was skeptical.
He became a great friend.
He basically said, "Well, I didn't know about the evidence-based concepts.
I did everything I did on the basis of intuition."
And of course he had done a brilliant job and had produced something.
So we began to have a dialogue with Stefan, Stefan Lundin, whose picture you see here in the lower corner.
I was invited back to Gothenburg to be an examiner for his Licentiate defense.
In their country, Licentiate defense is part way toward a PhD.
He had actually written a book, "Architecture as Medicine," with a psychologist, and I challenged him to see the value of decisions that could be improved by evidence in spite of his skepticism, and he challenged me to consider that intuition is based on tacit knowledge.
So this was part of the diagram that he had in his Licentiate materials.
It had intuition as opposed to evidence at the top.
Evidence was shared, explicit knowledge.
At the bottom it was personal, tacit knowledge.
And so evidence may be just one step above best practice, and ordinary practice may be just a step below best practice, and then intuition is personal, unconscious, and unreflective knowledge, but it is knowledge.
And what Stefan had gotten me to see was that what I had always thought about intuition was wrong, and so I added this fifth block.
I thought the word intuition caused me to believe it was an arbitrary choice.
It was the absence of knowledge, it was subjective, it was random, it was uninformed.
And Stefan got me to recognize, no, intuition is really based on you're having knowledge.
You just can't necessarily pull it out and verbalize it at the time.
That was a big deal for me.
It allowed me to now think in terms of not just the evidence side of this equation, but to think of it from the intuitive side as well.
So tacit and unconscious knowledge is there.
Intuition is based on experience.
Tacit knowledge is difficult to write down.
You can't really verbalize it.
And unconscious knowledge is something that you don't even recognize, but what you do shows that you know something.
And implicit knowledge is simply implied without being stated.
Well, we all know the parable of the blind men and the elephant, all seeing very different sorts of things.
And in our world, we have clinical outcome evidence, but on the other hand, we have had, for generations, fire and safety regulations, building materials, infection control, any number of things as simple as waterproofing a building, all come from some form of evidence.
And yet, what it sounds like when you talk too strongly about evidence is that you must be denying creativity.
Well, making a few important decisions on the basis of research findings does not mean you're ignoring creativity, okay?
Creativity, and perhaps intuition, is required to interpret the implications of research.
How are you going to use the knowledge that you gain through that?
'Cause after all, no research has ever been done on a building that you haven't built yet.
So the research always has to be interpreted for its value to your project.
Well, aesthetic creativity is still cherished.
The vast majority of decisions on a complex project are gonna be made on the basis of industry best practice.
So when research provides a needed answer, it's used in a successful design, it can be widely adopted, and eventually it becomes best practice.
The same is true for creativity.
When creativity develops a successful design that's not necessarily based on evidence and may come from intuition, it also can be widely adopted, and once it's known to be successful and has been adopted, it too becomes best practice.
So good ideas become best practice for the field.
And then this diagram, basically, was one that allowed me to think in a whole new way.
So if you think of the bell curve, where the vast majority of decisions are made in that middle part of the curve and down at the tails, where the lowest percentages of decisions are being made, a few on the left are being made on a basis of evidence and a few on the right are being made on the basis of intuition.
But ultimately, as they are successful, either way, as they become adopted by others, they cease to be something unique and new.
They become part of accepted best practice by the profession at large.
Well, so this caused me to sort of rethink, 'cause I had sure been thinking that anybody doing intuitive design was somehow avoiding the idea of using best evidence.
I learned that's not true.
We have the opportunity to come from both directions to the best practice, which is the vast majority of all decisions made all across the profession, all across any industry you may be in.
So the evidence-based practice is a small piece and the intuitive practice is a small piece, but best practice dominates.
Well, what do we do if we have information?
I've come to believe there's a moral responsibility to base the design decisions on some evidence.
Having the knowledge gives you a moral obligation.
So we know that traffic safety engineers are aware of the fact that if they place a curb cut from a parking lot too close to the intersection, there'll be more accidents.
Therefore, we expect the traffic safety engineers to design in such a way that they reduce the number of accidents that occur.
Same is true for aeronautical engineers.
You absolutely want to be sure that the aircraft designer used the best information possible to design the plane that you might be riding in.
And interestingly, isn't there a moral obligation for the physician using clinical best practice as they treat you?
Now, this one is scary, because in my world, we are aware that it takes 17 years for the absolutely current information coming out in the medical journals to be available to all physicians across the board.
That's scary, when you think they're not aware of what's really going on.
So what we all hope is that our personal physician is current on everything that relates to us.
They may not be current on illnesses or other things that are obscure and that don't relate to our particular case.
So responsibility is kind of interesting.
If architects are responsible for these designs, and if evidence indicates that design can improve clinical outcomes in patient safety, then surely healthcare architects have a responsibility to utilize that evidence.
And similarly, on the client's side, if credible evidence indicates design can improve outcomes and safety, and if healthcare executives are responsible for construction projects and budgets, then it would seem that healthcare executives have a responsibility to select and encourage qualified architects to utilize that kind of evidence.
So I found that, even in my own firm, where people were totally supportive of everything I was talking about, everything I was promoting in the world, they didn't really know how to actually do it.
So I actually had to produce a process to allow them to understand what it took to be an evidence-based practitioner.
And if you can't read this, basically the first three, identifying the client's goals and the firm's goals and identifying the top one or two key design issues, are things that people who've been trained in design, anywhere in the world, all know how to do.
The idea of identifying the client's goals and the project's important issues is something that every one of us has been taught.
It's the next three that make the difference to the process of an evidence-based project.
Number four is convert those design issues to research questions.
So it's a matter of reframing the statement of a design issue into a way in which one can go to the literature and find out what's associated with that.
A design issue is not necessarily stated in a simple way that allows you to immediately go discover it.
So an example might be, the client says that a key issue is to reduce drug mix errors for cancer patients.
Well, you know, you simply turn that around, and the questions become things like, what are the most prevalent drug mix errors?
What are the environmental factors that are associated with drug mix errors?
And you go through a series of questions, which now you can go to the literature and try to track something down.
So in number five, you're gonna gather information from a variety of sources, and the trick here is that if you think you know the answer, then you've really got to broaden your perspective and look in new ways.
If, on the other hand, you know nothing, you're taking the infinite and you're trying to narrow it down to what's most valuable and most important to you.
And then finally, number six, you're gonna have to interpret that evidence.
So as I said before, nobody ever studied a building you haven't built yet.
So there are no direct answers.
It requires open-minded creativity.
It requires critical thinking to take what you found in the literature, what you found from research, the best evidence that you can get your hands on, and interpret what it means to your project.
So those three steps are the things that really make something an evidence-based project.
Beyond that, we get immediately into, well, we're gonna use some design concepts.
Well, everybody who's been to a design school anywhere in the world knows how to produce concepts, and so you use the implications of your research study to maybe alter design concepts.
And then, if you're involved in this notion of an evidence-based process, then you want to develop some hypotheses.
Now, these are design hypotheses.
You want to predict the expected results from the implementation of your design.
And again, as I said earlier, I think that's my minimum level of understanding of whether a person is actually practicing this way.
Now, my staff told me, "Well, wait a minute.
Sometimes we develop the hypothesis first and then we create the concept that will implement it."
So you see the sort of circular things between seven and eight.
And then, by the time you're down to nine, you're now gonna select the actual measure that will allow you to tell whether your design hypothesis was supported or not, and "supported or not" is important language.
For those of us in architecture, we never really (indistinct) precise about that sort of thing.
It's all too easy to say, "Oh, well there's a study.
It's proof that this is the case."
Well, anybody who's a scientist or a researcher, including your clients if you're in a medical facility, they'll all roll their eyes if you claim that something has been proven by one study or one analysis.
So please try to learn, never claim proof.
All you're really trying to say is the hypothesis was supported or it was not.
So that's how you use evidence that other people have gathered.
Well, sometimes you need to do a study yourself and see if you can gather information that would be useful to your client or to your project.
So the question might be, all right, what is it that we study?
Well, if you use this sort of typical two-by-two grid, on the bottom, you see that it's low on the left and high on the right.
That's the impact of the decision that you're trying to make.
And then on the vertical axis you can see low is at the bottom and high is at the top, but this time it's talking about the degree of uncertainty.
What do you not know?
So it's lack of evidence.
The lack of evidence is either low because you have good information, or it's high, okay?
So if this is a legitimate way to parse what's going on, if you look in that upper left, you don't really know very much but it's not very important.
So this one is an area where you can be subjective and creative.
In the lower left, you have good information this time, but it's still not very important.
It's objective and can be simple.
In the lower right, well, now it's an important decision, right?
It's high impact, but you have good information, right, where you have a low degree of uncertainty.
So good information means that your client really wants you to get this right.
You'd better be correct when the information is good and the decision is important.
This is a demanding answer.
Well, now, the place where you should be doing your work as a researcher, the place for discovery and original critical thinking is in that upper right where little is known but the decision is important.
So what all this means is that the whole left side of the diagram is the area of maximum design freedom, because the impact of the decision is low, and across the top, where little is known, is the domain of the highest creativity, required for you to be able to come up with new material.
So in an awful lot of ways, what I found myself arguing for in the press, in peer-reviewed and magazine-type material, and even in guest presentations at conferences, was that our profession needed to have a higher level of rigor than it was accustomed to.
So what is it that was meant by rigor?
Well, I think it means that you have a serious search for evidence that is relevant, that is the best available, that is current, that it focuses on the issues that you are concerned with.
And then eventually you need to take a chain of logic from the evidence itself, from what you found through your critical thinking and interpretation of what it meant to an actual design concept that you're gonna put into a real project.
And once you have such a concept that is based on a chain of logic back to the original evidence, well, I'd like to hope that you would hypothesize what that's gonna mean.
So a design hypothesis is as simple as saying, "Well, we're gonna put the reception desk directly in front of the outside door because we want people to immediately see it and utilize it better than in the earlier diagram where we had it off to the left, like you would have at a hotel lobby for the reception desk.
We're gonna put it right in the line of..." And then you could follow that up by observing how people used it and discover whether or not you were correct.
Maybe, on the other hand, you decided to put the reception desk off-center so that it was closer to where the elevators were gonna be so that more people would have a shorter path to where they would ultimately go.
But you need to be able to describe, "Because I've designed this concept and put it into the project, I expect this result," and the result should be something that you could measure.
So measuring it after the fact, the way I used to do when I was just a sort of Level One practitioner, isn't quite right, right?
You're just sort of stumbling over the positive things.
What you need to do is state the design intent in advance, before you have the chance to measure it, to make sure that you designed it, you intended for the outcome, and you either got it or you didn't, right?
It was either supported or it was not.
So you make a commitment to measure and you use measures that are appropriate to what you're trying to measure.
You try to be precise, accurate.
The measurement should have validity.
And then, if you find out something interesting, if you find out something new and valuable, then you make sure to have a commitment to share the knowledge gained.
So I think that's it in terms of the slides.
I think we can cut the slides off at this point, unless we need to go back to them for questions.
But what I would like to say is that I felt a sort of passion as a young architect working in the healthcare field.
I hadn't really intended to get in healthcare, but once I was in it, I realized I really needed to do the best I could possibly do because people's lives and, you know, their health was at stake.
And so with the stakes that high, I tried to learn how to do the best I could.
So I'm not ashamed of anything that I did while I was a Level One practitioner, but it hadn't occurred to me that all of this additional rigor could be put in place.
So if you think about it, in the late '60s and early '70s, people like Robert Sommer and Edward T. Hall, a variety of people, were giving us information from the social sciences that came above and beyond what we were learning about sun angles and heat penetration, building material usage, and so on that had been so normal for us as practitioners.
We were learning to use other information, but it wasn't really connected to the clinical world until 1984 when Ulrich did his extraordinary study, and it took me from 1984 till 2004, when I joined Texas A&M.
I practiced for another 20 years before I had really escaped the Level One mentality and gotten myself up to Level Two.
By the time I was really a convinced Level Two practitioner, I actually left practice and came to the academic world.
So I spent my time, you know, shouting in the wind and arguing that the profession was not doing what it should, when I, in fact, myself had not done all that I now feel that I could.
Had I understood evidence-based practice the way I understand it now, I'm convinced that I would've been a better practitioner during all the years that I did practice.
I owe an enormous amount to people like Roger Ulrich, who eventually became my mentor here at Texas A&M before he moved on to Sweden.
He and I served on organization boards together for many years.
There were a number of people who influenced my laser-like focus on this notion that evidence was what was needed and we were not doing it right.
And so imagine my surprise when Stefan Lundin tells me, "Eh, who needs evidence?
You know, I did everything really well and it was all based on intuition."
And that was when I finally understood you have to have a balance, you have to have a balance of those two things.
And that it's not the groundbreaking new material that comes out of an evidence-based process or the groundbreaking new creative ideas that come out of intuition.
That's not where the real world is doing its majority of its work.
It's in the big bowl of best practice, and best practice is fueled by evidence on the one side and fueled by intuition and creativity on the other side.
And it only becomes best practice if it survives, if it has been challenged, if it shows that it works, if it is consistently working in a number of settings.
That's how something becomes best practice.
It's not the evidence-based process that I so cherished and fought so hard to understand myself and try to explain to others.
It really is, how can we get to best practice?
How can we find our way to the world of best practice in the most effective way?
And I think, no matter which side of that equation you're coming from, it requires critical thinking, it requires creativity, it requires hard work, to get you to either of those sort of starting points to generate something that will ultimately become best practice and be shared by the profession or the whole world.
So I'm gonna stop there.
That's my way of apology for many years of focusing so precisely on evidence-based practice that I may have overlooked some things about the creative side of what we do, but there's no question that architecture is, always has been, a mix of art and science.
It's one of the beautiful things about our practice.
And this is my apology to sort of say, okay, so I pushed the science side awfully hard for a long time, but I want you to know, I recognize that the art side has its legitimate place in the partnership of how we together get to best practice.
So are there any questions?
Hou.
Go ahead.
I think he can probably turn on that microphone, maybe?
(production team chattering) - Oh.
I'm Hou, a doctoral student.
Yes, thank you so much for sharing your experience.
I can see your passion and dedication to evidence-based design, and one slide you show is fascinating to me.
The curve shows that on one end is evidence-based design, on the other end intuition, and best practice dominates the profession.
And my question is, do you see a trend that there are more and more evidence-based design projects, and what do you think is the future of this curve?
Will it change in the near future or not?
- Thank you, Hou.
That's a good question.
What's interesting is, a lot of the resistance to evidence-based practice at the beginning came from people who, like me, had had no education in how to understand research, no education in how to read a scholarly paper, and who were made nervous by the idea that the way they had practiced for so many years quite successfully was suddenly being challenged.
And so there was some resistance at the outset.
Today, I think, evidence-based practice is in the mainstream.
It's not an outlier.
It's now considered to be normal.
And yes, we are having more and more projects that can explain the evidence that they used to arrive at decisions.
I'm not sure that anything will really change.
A bell curve is essentially a standard profile.
And so the percentages of decisions that are made on either of those tails are very small, and the number of decisions that are made in best practice will continue to be the majority of decisions forever.
Are we gonna see evidence-based overcome and squash intuition?
I don't think so.
Architects have forever been creative and intuitive and artistic and they will continue to be.
So I think both sides of that bell curve are healthy and will continue to contribute to the middle, and the middle, as it should be, is the way in which most people discover a good answer to their questions on any specific project, because all projects are different in some way.
So yes, I think the evidence-based side of that equation is now better understood and is more common and more normal than it used to be, but I don't think anything is gonna take away from the intuition side of the curve.
Anything else?
- [Hou] Thank you.
- Thank you, Hou.
Yes?
- Thank you for sharing your experience, Dr. Hamilton.
My question is, what are your recommendations if we face a lack of information, especially in some countries?
For example, in the process we have to collect project information, sometimes climate information or other data related to a project in some country.
What are your recommendations for this?
- That's a great question.
It leads to an answer that should be on everybody's mind all the time, and that is, you never have all the information that you wish you had on any project ever.
You always have some gap, some lack of knowledge.
So I think, you know, your question is well put.
There are going to be situations where you're doing work in strange places or whatever.
I did work in Tatarstan.
I did work in the Caribbean on islands.
And the information available in places like that was always less than what we had.
So sometimes you extrapolate from what you do know, that if this is true over here, maybe it's almost true over here.
(chuckles) And so you can do the best you can with what you can get your hands on.
But the idea of identifying, what do we need to know?
What decisions are the ones that are most important for this project?
What do I need to know to make the best decision about that key issue?
And if you are thinking closely about what I've said, you may recognize I said that only one or two key decisions are the ones where you go out and try to really go to the literature and seek new material that you haven't had available to you.
Because if you try to make five or six or seven decisions like that, where you're investing a huge amount of time in the libraries and so on, it's just not practical for the real world.
So if you can do a serious job of researching and investigating one or two important issues for your project, you should feel fortunate if you get good information, but you should always be aware it's never gonna be everything that you really wanted.
I can say that I practiced for 30 years and I never had a project where I had all the information that would've made it perfect.
Just doesn't happen.
- [Audience Member] Thank you.
- Other questions?
I think we still have a little time left here.
- Thank you, Professor.
I have a question related to one of your points over there, regarding whether evidence-based design is defined differently in different regions.
Like, you went to Sweden, and did you find anything different between here and there?
- Sure, it's a good question.
So I mentioned the project in Tatarstan, it's in the former Soviet Union, and they had intentions to build a truly modern hospital, but their infrastructure wouldn't support it.
They didn't have the ability to heat and cool and do some other things to condition the systems for some of the equipment that was required.
So yes, we had to deal with it in a different way.
The project that I mentioned in the Caribbean, on the island of Antigua, for the Ministry of Health, is basically the only hospital in the country.
Antigua and Barbuda is the country.
And there everything was naturally ventilated.
None of the patient spaces were air-conditioned.
Everything was the cool Caribbean breezes and all of that.
The only part of that building that was air-conditioned was the technical part.
Surgery and radiology could not be done with modern equipment unless it had dehumidification and other sorts of temperature controls and so on.
So it was a total reversal.
We were air conditioning the equipment, not the people.
Another example is, we gathered all the information, all the evidence that we could possibly gather, for the Hermann Hospital System in Houston, and they wanted to do an obstetrical project in The Woodlands.
And so we brought in world-class experts and we gathered all the data you could possibly have and we designed for them a very successful maternity unit in The Woodlands.
And they then wanted to do a maternity unit in the southwest part of Houston, and we had just gathered all this content and all this information.
Can we just use the same thing?
And the answer was, no.
The local culture in the southwest could not accept the type of caregiving that we had designed for The Woodlands.
There's a totally different population and we used the same information to produce two different designs because the population being served was different.
So yes, one of the early complaints about evidence-based design, some of the practitioners that were resisting said, "Oh, that's just cookbook architecture.
There's no creativity left.
You know, you're just gonna open the drawer and pull out the design that has already been determined, and, you know, everything is done."
That's not true.
It couldn't be further from the truth.
You really need to use serious thinking, critical thinking.
You need to use very creative analysis and interpretation in order to get the best results, and the results, using the same information, can be quite different in different places.
The other thing that's true is, you may have gathered all the information for this Project A and then Project B comes along three years later.
Well, guess what?
There's new evidence.
The evidence is constantly changing.
The evidence is constantly being updated and added to.
So there's always something new.
And that means the architect, the designer, must be thoughtful about what's going on and must be constantly paying attention to what material they need to recognize.
So I think we're down to our last minute or so.
So I'll just close by suggesting that while I'm in front of an audience of students, the truth is, my presentation today was really aimed more at the practitioners in practice.
Most of you students have heard me speak before and you've heard an awful lot of this, but I think the practice has known less about my recognition that there is just as much intuition as there is evidence-based practice.
So Cynthia, do you need to close or is that it?
- All right, we want to thank Dr. Hamilton for joining us today (audience applauding) and invite all of you to join us next week where we will have Carlos Moreno of Perkins&Will, I'm sorry, Carlos Moreno of Perkins Eastman, and we're excited to hear him speak.
Have a great day.
(upbeat music)