ENVIRONMETRICS AUSTRALIA
Statistical Solutions to Environmental Problems

News & Updates

Climate Science on the Skids
May 30, 2016 
CSIRO CEO Dr. Larry Marshall makes another extraordinary claim.

New paper published
May 25, 2016 
Professors David Fox (Environmetrics Australia / University of Melbourne) and Wayne Landis (Western Washington University) respond to renewed calls to retain the NOEC in ecotoxicology.

Larry Marshall - Same dog, different leg action
April 11, 2016 
Although CSIRO has a new CEO in Larry Marshall, the 'innovation' rhetoric and restructuring are nothing new. You'd think that with the passage of 10 years, the organisation (indeed any organisation) would be reaping the benefits of structural change. Regrettably, CSIRO never got off the slash-and-burn treadmill.

CSIRO suffers the bends after Deep Dive
April 08, 2016 
CSIRO in the spotlight again - for all the wrong reasons (again)

Hey Larry - it's not either / or
February 12, 2016 
CSIRO's boss, Larry Marshall, takes the axe to climate research

Statisticians and (eco)Toxicologists Unite!
January 5, 2016 
As debates about the legitimacy of NOECs and NOELs continue unabated, we believe it's well and truly time to establish a sub-discipline of Statistical (eco)toxicology.

Revised ANZECC Guidelines officially released!
22 December, 2015 
It's been a long process, but the Revised ANZECC Water Quality Guidelines for Toxicants have been officially released.


SETAC Australasia - Nelson NZ
27 August, 2015 
"Toxicant guideline values for the protection of aquatic ecosystems -  an improved derivation method and overview of priority toxicants."

Rick van Dam, Graeme Batley, Michael Warne, Jenny Stauber, David Fox, Chris Hickey, John Chapman

Is data scientist the sexiest job of the century?
19 April, 2015 
A few years ago, the Harvard Business Review hailed the burgeoning role of data scientist as "the sexiest job of the 21st century". With big data technology driving the change, how does the new role stack up?

Social 'Science' - Science No More!
18 March, 2015 
This is not a bad dream - the journal Basic and Applied Social Psychology has banned the use of statistical inference!

Hello 2015. Goodbye LinkedIn
January 2, 2015 
Have you stopped to think about the actual value YOU derive from having a LinkedIn account?

CSIRO and the Gutting of Wisdom
21 December, 2014 
Read Bridie Smith's story about the impact of funding cuts to the CSIRO.

Statistical Janitorial Services
December 31, 2014 
We've written about BIG data before and, while some reckon it's sexy, you'd better roll up your sleeves because you'll invariably need to do a lot of 'janitorial' (a.k.a. shit) work first!

The problems of very small n
December 4, 2014 
Professors Murray Aitkin and David Fox are invited speakers at the Australian Applied Statistics Conference (AASC) 2014.

BIG data is watching you
November 6, 2014 
Ron Sandland recently wrote about the new phenomenon of 'big data' - weighing up the benefits and concerns. Terry Speed reflected on the same issue in a talk earlier this year in Gothenburg, Sweden, noting that this is nothing new to statisticians. So what's all the fuss about?
Here's another take on the 'big data' bandwagon.

New Method for Water Quality Guideline Calculations
Sep 15, 2014 
The ANZECC (2000) Guidelines are currently being reviewed.

The Explosive Growth of R
Sept 3, 2014 
Have no doubts - R reigns supreme!!

R - the Wikipedia of statistical software?
August 20, 2014 
The R computing environment is feature-rich, incredibly powerful, and best of all - free! But to what extent can we trust user-contributed packages?

Let there be light!
May 22, 2014 
New Industry Standard for managing seagrasses during dredging projects.

Statistical Accreditation
May 20, 2014 
Make sure you're dealing with someone who knows their stuff!

Job losses at CSIRO bigger than expected
May 15, 2014 
Confirmed in a message yesterday from CSIRO Chief Executive, Megan Clark:

Australian Science takes a hit
May 15, 2014 
Joe Hockey's budget has not been kind to science

Information-gap decision theory creates a gap in ecological applications and then fills it
May 14, 2014 
You may not have heard of Info-Gap Decision Theory (IGDT) but don't worry, not many people have.

Probability Weighted Indices for Improved Ecosystem Report Card Scoring
May 09, 2014 
A new way of calculating an environmental index is described in our paper "Probability Weighted Indices for Improved Ecosystem Report Card Scoring", recently published in Environmetrics. Click here.


New Report on Ecosystem Report Cards
April 4, 2014 
'Report Cards' and their associated scoring techniques are widely used to convey a measure of overall ecosystem health to a wide audience. However, as with most things, developing, testing and validating these metrics is not straightforward.

Revision of Australian Water Quality Guidelines
March 27, 2014 
The long-awaited review of the ANZECC/ARMCANZ (2000) Water Quality Guidelines is now well under way!

Turbidity Monitoring clouded by dubious science
February 17, 2014 
Regulators and industry around the country are using a potentially flawed method to set environmental limits on water column turbidity.

Breaking down the team barrier
21 January, 2014 
New research suggests that team effectiveness may actually benefit from tension and hostility.

Mathematics of Planet Earth
May 30, 2013 
Local and international experts come together to discuss how mathematical and related scientific disciplines can be utilised to better understand the world around us.

Canadian Environmental Science and Regulation under threat
12 April 2013 
The Canadian Federal Government is making drastic reductions in the reach and capabilities of its environmental science departments.  Read Peter Wells's Marine Pollution Bulletin article.

High Impact
15 March 2013 
The peer-reviewed journal "Integrated Environmental Assessment and Management" (IEAM) lists Fox (2012) as one of its most accessed articles in 2012.

New Predictive Capability for Dredging projects
29 November 2012 
Environmetrics Australia has developed a unique water column turbidity and benthic light forecasting system.


Archive

R - the Wikipedia of statistical software?
August 20, 2014 

The R computing environment is feature-rich, incredibly powerful, and best of all - free! But to what extent can we trust user-contributed packages?


There was a time when I believed the 'R-learning curve' was too steep and the investment of time and effort not warranted, because: (a) I had software that did most, if not all, of what I needed to do; and (b) my analyses tended to be 'one-off' and did not require code that could be recycled later on.

I no longer subscribe to that view and, having written over 5,000 lines of R-code and taught a number of courses in both elementary and advanced uses of R, I think I can say I'm 'up to speed' on R.

However, before my epiphany I used to vigorously defend my inefficient approach to statistical analysis, arguing that at least by using commercially produced software I had some assurance that the algorithms had undergone rigorous QA/QC and that there was some level of customer support if problems arose.

During this period, I recall on more than one occasion having 'robust' dinner and bar conversations about R's QA/QC processes with some of my statistical colleagues who, it is true to say, are R aficionados and had seen the 'R-light' many years before me. In response to my question "so how can you be sure that an R package gives the correct results?", one of these colleagues wryly replied "you get what you paid for". Touché! Although I'm well down the R path and fully immersed in this computing paradigm, I have not been able to extinguish the nagging doubt about this fundamental quality assurance issue.

More recently, I came across an R promotional video produced by Revolution Analytics (http://goo.gl/u5t5ph) which spoke to this unresolved issue with its reference to R's "crowd-sourced quality validation and support from the most recognised industry leaders in every field that uses R".

But is this sufficient? After all, given that no money has changed hands, there are no consumer warranties and hence no commercial or legal imperative to fix that which is not right, or even that which is known not to be right.

As a recent example of this, I was using the R-package 'lpSolve' to find a solution to a large (strictly) integer programming problem. After 12+ hours of grinding away, the software did indeed identify a solution. Pleased with the outcome, I proceeded to use the component of the lpSolve output that holds the computed optimal value of the objective function (which should have been of class integer) as an array dimension. It was at this point that my own R-code crashed. A quick check revealed that this value was stored as double precision and was equal to the correct integer solution plus delta, where delta was of the order 10^(-14). Clearly no big deal, and easily fixed.

What I found more intriguing was the following in the lpSolve documentation with respect to setting the use.rw argument of the lp() function to TRUE (page 3): "This is just to defeat a bug somewhere. Although the default is FALSE, we recommend you set this to TRUE if you need num.bin.solns > 1, until the bug is found". How long has that disclaimer been there, I wondered, and what is this mysterious bug and when will it be fixed? Given that I was looking for multiple optimal solutions, I did need to set num.bin.solns to something greater than unity. Re-running lpSolve with this option invoked did work, but the output took an unexpected form.

Conventionally, but not always, all the necessary output from an R routine can be accessed either by typing the name of the object to which the output has been assigned, or by typing summary(object). Not so the lpSolve package. Typing the name of the object simply gives: "Success: the objective function is ...". Applying the summary command to the object merely lists the components of the object (which is an R list). Drilling down into this list, one can extract the solution vector. Doing this, I was surprised to find the values of my decision variables for the multiple optimal solutions simply concatenated into one long, unstructured vector whose length was equal to (the number of decision variables * the number of solutions) + 1. What was the extra component, I wondered? There was nothing in the documentation about it, so I shot off an email to the maintainer of the lpSolve package. I received a prompt response suggesting that the solution vector's length was a quick fix to an inconsistency between R and C, together with an admission that both the documentation and the output needed improvement.
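To make the two quirks concrete, here is a minimal sketch using a tiny all-binary problem of my own invention (not the original problem, which isn't reproduced here). The round() coercion and the reshaping of the concatenated solution vector follow the behaviour described above; where the extra element sits in that vector is an assumption you should check against your own version of lpSolve.

## A tiny, hypothetical all-binary problem with several optimal solutions
library(lpSolve)

obj <- c(1, 1, 1)                       # three binary decision variables
con <- matrix(c(1, 1, 1), nrow = 1)     # one constraint: x1 + x2 + x3 <= 2
res <- lp("max", obj, con, "<=", 2,
          all.bin = TRUE,
          num.bin.solns = 3,            # ask for multiple optimal solutions
          use.rw = TRUE)                # as the documentation recommends

## Quirk 1: the returned objective value is stored as double, so coerce it
## before using it as an array dimension
k <- as.integer(round(res$objval))

## Quirk 2: res$solution concatenates the returned solutions into one vector
## of length (number of variables * number of solutions) + 1. Drop the extra
## element (assumed here to sit at the end) and reshape, one solution per row.
n.vars  <- length(obj)
n.solns <- 3                            # the number requested above
solns   <- matrix(res$solution[1:(n.vars * n.solns)],
                  nrow = n.solns, byrow = TRUE)
solns

Whether the padding sits at the front or the back of that vector is exactly the sort of detail the documentation leaves unclear, which is rather the point.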


While I appreciate the honesty, it does go to show that with R it's caveat emptor or, as my colleague Andrew Robinson at the University of Melbourne prefers, caveat computator!


Prof. David Fox

Copyright (C) 2014 ENVIRONMETRICS AUSTRALIA