Current efforts to treat HIV patients with anti-retroviral drugs often involve significant “trial and error”: many different drugs and doses are tested before the most effective combination for a particular patient is found. These efforts are further complicated when the virus develops drug resistance.
These problems could be reduced by a computer model developed by scientists at Johns Hopkins and Harvard universities. The model predicts how successful a given treatment will be for a patient. By modelling how drugs penetrate different parts of the body, including the bloodstream, stomach, and brain, it can predict viral loads and whether the virus population will grow or shrink. The model can even take into account a patient’s individual behaviour (such as how closely they follow their prescription – which can seriously alter a drug’s effectiveness). The model was created using data on 20 anti-HIV drugs from thousands of virus level tests. (Full article)
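To give a flavour of the idea (this is a toy sketch, not the researchers’ actual model, and every parameter value below is invented for illustration), a minimal simulation might track a drug concentration that rises with each dose the patient actually takes and decays as the body clears it, with the viral load growing or shrinking depending on how much drug is present:

```python
import random

# Toy illustration (NOT the Johns Hopkins/Harvard model): a single-compartment
# simulation where viral load grows or shrinks depending on drug concentration.
# All parameter values are made up for demonstration purposes.

def simulate(days, adherence, dose=1.0, clearance=0.3,
             ic50=0.4, growth=0.1, kill=0.25, v0=1000.0):
    """Return the viral load after `days` of once-daily dosing.

    adherence: fraction of doses the patient actually takes (0..1) --
    the kind of individual behaviour the real model accounts for.
    """
    random.seed(0)                        # reproducible missed-dose pattern
    conc, viral = 0.0, v0
    for _ in range(days):
        if random.random() < adherence:   # does the patient take today's dose?
            conc += dose
        conc *= (1 - clearance)           # drug is cleared from the body
        inhibition = conc / (conc + ic50) # simple saturating dose-response
        # net change: viral replication minus drug-driven suppression
        viral *= 1 + growth - kill * inhibition
        viral = max(viral, 0.0)
    return viral
```

With perfect adherence the drug concentration settles high enough to push the viral load down; skipping doses lets the virus rebound, which is why adherence matters so much to a prediction.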
Computer models are often used by fire behaviour analysts (FBANs) to predict where wildfires might spread, and how quickly. This in turn helps FBANs tackle fires in the most effective way and even develop new, more efficient fire-fighting techniques.
Recently, however, firefighters in the US have found that their computer models need more and more tweaking to accurately predict the path of wildfires. There are several reasons the models’ outputs do not match reality. One is that decades of successful fire suppression have left forests covered in continuous unburnt fuel, whereas the models assumed forests would be a patchwork of burnt and unburnt areas, with the old burns acting as natural firebreaks. This is a good example of how assumptions can affect the accuracy of a computer model.
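The firebreak assumption is easy to demonstrate with a toy cellular-automaton fire model (a common teaching exercise, nothing like the FBANs’ actual software): fire spreads between neighbouring cells that contain fuel, and previously burnt patches stop it.

```python
# Toy cellular-automaton fire spread (illustrative only). Fire spreads to the
# four neighbouring cells that still contain fuel ('F'); cells burnt in an
# earlier fire ('.') carry no fuel and so act as natural firebreaks.

def burned_area(grid):
    """grid: list of strings, 'F' = fuel, '.' = old burn. Fire starts top-left."""
    rows = [list(r) for r in grid]
    h, w = len(rows), len(rows[0])
    stack = [(0, 0)]
    burned = 0
    while stack:
        y, x = stack.pop()
        if not (0 <= y < h and 0 <= x < w) or rows[y][x] != 'F':
            continue                       # off the map, or nothing to burn
        rows[y][x] = '*'                   # this cell burns
        burned += 1
        stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return burned

uniform = ["FFFF", "FFFF", "FFFF", "FFFF"]   # continuous unburnt fuel
patchy  = ["FF..", "FF..", "....", "..FF"]   # old burns break up the fuel
```

In the patchy landscape the fire burns only the four connected fuel cells around its start point; in the uniform landscape the same ignition consumes all sixteen. A model built on the patchy assumption will badly underpredict fires in today’s unbroken fuel.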
Another reason is the unusual weather of recent years, which has left forests dry and more vulnerable to fire. These factors mean that when wildfires do occur, they are much larger than anything the models predict – so large, in fact, that they generate their own miniature weather systems, something the models do not take into account.
The full article from The Atlantic is a very clear explanation of the many factors that affect the accuracy of these fire prediction models. (Full article)
Climate models are the canonical example of computer models, so it would be hard not to include them in this post. However, this Ars Technica article takes a slightly different approach, explaining why the climate model results in the latest Intergovernmental Panel on Climate Change (IPCC) report are not much of an improvement over the previous generation of climate models. As computing power has increased, researchers have been able to include many more variables and processes in the models, as well as removing older assumptions. In the third assessment report, for example, the models treated solar energy as a constant; the models behind the latest report (AR5) no longer do. Similarly, the latest models now include direct and indirect effects of atmospheric aerosols, and run at a much higher resolution.
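To see what replacing the “solar energy is constant” assumption looks like in miniature, here is a textbook zero-dimensional energy-balance model (emphatically not an IPCC model): it computes the Earth’s effective radiating temperature from the incoming solar flux, so swapping a fixed solar constant for a varying one directly changes the answer.

```python
# Toy zero-dimensional energy-balance model (a textbook exercise, NOT an IPCC
# climate model). Absorbed sunlight is balanced against blackbody emission.
# Solar constant ~1361 W/m^2 and albedo ~0.3 are standard textbook values.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar, albedo=0.3):
    """Effective radiating temperature (K) that balances absorbed sunlight."""
    absorbed = solar * (1 - albedo) / 4   # /4: a sphere intercepts a disc
    return (absorbed / SIGMA) ** 0.25

t_fixed = equilibrium_temp(1361.0)        # the old "constant sun" assumption
# Solar output actually varies by roughly +/-0.1% over the ~11-year cycle:
t_bright = equilibrium_temp(1361.0 * 1.001)
```

The fixed-sun answer comes out near the familiar 255 K, and a 0.1% brighter sun nudges it upward; a real model removes the assumption for the same reason, just with vastly more physics attached.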
Despite these improvements, researchers believe many variables are still missing from the latest climate models (perhaps because we don’t yet understand them), and, bizarrely, the improvements mean we have less real-world data against which to test the models’ accuracy. (Full article)