DLMtool: Data-Limited Methods Toolkit (v3.11)

Tom Carruthers (t.carruthers@fisheries.ubc.ca) and Adrian Hordyk (a.hordyk@murdoch.edu.au)

2016-06-08

1 Introduction

As many as 90 per cent of the world’s fish populations have insufficient data to conduct a conventional stock assessment (Costello et al. 2012). Although a wide range of data-limited management procedures (MPs; stock assessments, harvest control rules) have been described in the primary and gray literature (Newman et al. 2014), these are not readily available, easily tested or compared. Critically, the path forward is unclear. How do these MPs perform comparatively? What are the performance trade-offs? Which MPs are inappropriate for a given stock, fishery or data quality? What is the value of collecting additional data? What is an appropriate stop-gap management approach?

DLMtool is a collaboration between the University of British Columbia and the Natural Resources Defense Council aimed at addressing these questions by offering a powerful, transparent approach to selecting and applying various data-limited MPs. DLMtool uses Management Strategy Evaluation (MSE, closed-loop simulation) and parallel computing to make powerful diagnostics accessible. A streamlined command structure and operating model builder allow for rapid simulation testing and graphing of results. The package is relatively easy to use for those inexperienced in R; however, complete access and control are available to more experienced users.

While DLMtool includes over 55 MPs (e.g. DCAC, DBSRA), it is also designed to be extensible in order to encourage the development and testing of new MPs for informing management of data-limited fish stocks. The package is structured such that the same MP functions that are tested by MSE can be applied to provide management recommendations from real data. Easy incorporation of real data is a central advantage of the software, and a set of related functions automatically detect which MPs can be applied given the available data and what additional data are required to get other MPs working.

DLMtool has been used in setting catch limits at the Mid-Atlantic Fishery Management Council (US) and is being used to test management procedures in California state fisheries (California Department of Fish and Wildlife), the Caribbean (NOAA), and for seafood certification purposes (MSC).

2 Version Notes

The package is subject to ongoing testing. Once again, if you find a bug or a problem please send a report to t.carruthers@fisheries.ubc.ca or a.hordyk@murdoch.edu.au so that it can be fixed!

2.1 New Additions to this Version (3.11)

A number of small but important bugs have been fixed, with special thanks to Liz Brooks, Helena Geromont, and Bill Harford, for alerting us to some of these issues.

Quang Huynh has recoded the mean length methods in C++; they now run much faster and should pass the time-limit constraint.

A new function (runMSErobust) has been added, which is a wrapper for the runMSE function. In time this may replace runMSE as the primary function to use when running an MSE. runMSErobust splits large simulations into a series of smaller packets and stitches them together to return an MSE object. This has the benefit of increasing speed and efficiency, particularly for runs with a large number of simulations. The function also checks for errors and restarts the MSE if the model crashes.
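
As a rough usage sketch (this assumes runMSErobust accepts the same core arguments as runMSE, which its role as a wrapper suggests; the operating model and MP choices below are purely illustrative):

OM <- new('OM', Blue_shark, Generic_fleet, Imprecise_Biased)
bigMSE <- runMSErobust(OM, MPs=c("DCAC", "DBSRA", "Fratio"), nsim=400, reps=1, proyears=30, interval=5)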

A set of functions, OM_xl and Fease_xl, have been added. These are used to read operating model and feasibility parameters from an Excel spreadsheet rather than from CSV files. They are essentially wrapper functions that allow you to store all operating model tables in a single spreadsheet rather than in many separate CSV files, which is mainly useful if you are working on multiple species or stocks.
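
A hedged sketch of how these might be called (the argument names fname and stkname, and the workbook name, are assumptions for illustration only):

ourOM <- OM_xl(fname="MyStocks.xlsx", stkname="Snapper")       # operating model tables
ourFease <- Fease_xl(fname="MyStocks.xlsx", stkname="Snapper") # feasibility tables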

The size limit feature has been updated to include an upper slot limit. See slotlim for an example MP. The slot limit is specified as the last element in the input control vector. Similar to the lower size limit, all individuals above the slot limit experience no fishing mortality.
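
The sketch below, modelled on the matlenlim MP shown in Section 6.3, illustrates the idea: the input control vector gains one final element carrying the upper slot limit. The particular lengths, and the use of vbLinf to set the limit, are illustrative assumptions; see the built-in slotlim MP for the package's own implementation.

myslotlim <- function(x, DLM_data, ...) {
  Allocate <- 1                         # no reallocation of spatial effort
  Effort <- 1                           # effort equal to the last historical year
  Spatial <- c(1, 1)                    # no spatial restriction
  newLFC <- DLM_data@L50[x] * 0.95      # length at first capture
  newLFS <- DLM_data@L50[x]             # smallest length at full selection
  UppLim <- DLM_data@vbLinf[x] * 0.9    # assumed upper slot limit (last element of the vector)
  c(Allocate, Effort, Spatial, newLFC, newLFS, UppLim)
}
class(myslotlim) <- "DLM_input"
environment(myslotlim) <- asNamespace("DLMtool")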

A number of new MPs have been added. There are now 63 output and 22 input control MPs in the DLMtool.

A new function makePerf has been added. This function takes an OM object and returns the same OM object with no process or observation error. It is useful for testing the performance of methods under perfect conditions, to see if they work as expected, and for debugging!
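
A minimal usage sketch (the operating model construction is taken from the Quick start section below; the MP choices are illustrative):

OM <- new('OM', Blue_shark, Generic_fleet, Imprecise_Biased)
perfOM <- makePerf(OM)   # same OM but with no process or observation error
perfMSE <- runMSE(perfOM, MPs=c("DCAC", "DBSRA"), nsim=16, reps=1, proyears=30, interval=10)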

Two new plotting functions have been added: wormplot, which creates worm plots of the likelihood of meeting biomass targets in future years, and VOIplot, which is an alternative value-of-information plot, similar to the VOI function, showing how observation and operating model parameter values affect trends in long-term yield and biomass.
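
For example, applied to an MSE object such as the SnapMSE object created in the Quick start section below (default arguments assumed):

wormplot(SnapMSE)   # probability of meeting biomass targets in future years
VOIplot(SnapMSE)    # how observation/operating model parameters affect long-term yield and biomass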

Coming soon: bag limit MPs for recreational fisheries

2.2 Notes from version 3.1

Variable historical selectivity patterns. To simulate fisheries that have experienced important shifts in historical length selectivity, these shifts can now be specified by the user, either through a graphical user interface (the ‘ChooseSelect’ function) or by manually editing a series of new slots in the Fleet object (SelYears, AbsSelYears, L5Upper, L5Lower, LFSUpper, LFSLower, VmaxUpper, VmaxLower).
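
A hedged sketch of the manual route (the number of selectivity periods, the years chosen and the length bounds are illustrative assumptions):

ourfleet <- Generic_fleet
ourfleet@SelYears <- c(1, 25)       # assumed first years of two selectivity periods
ourfleet@L5Lower <- c(30, 40)       # bounds on length at 5% selection in each period
ourfleet@L5Upper <- c(40, 50)
ourfleet@LFSLower <- c(50, 60)      # bounds on the smallest length at full selection
ourfleet@LFSUpper <- c(60, 70)
ourfleet@VmaxLower <- c(0.5, 0.8)   # bounds on vulnerability of the largest individuals
ourfleet@VmaxUpper <- c(1.0, 1.0)
# or, interactively (assuming the function accepts and returns a Fleet object):
# ourfleet <- ChooseSelect(ourfleet)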

Cyclic recruitment patterns. Persistent shifts in stock productivity are a particular concern for fishery management. These can now be generated in the toolkit using a new function SetRecruitCycle that generates a cyclical pattern in recruitment strength.

Length-based spawning potential ratio (SPR) MPs have been added.

Two features have been added that allow MPs to return additional information and use that information in later iterations. (1) The DLM_data object that MPs operate on now has a miscellaneous slot, Misc. (2) MPs can now return a list: the first position is the management recommendation (e.g. a TAC) and the second is information that is stored in the Misc slot and can be used by the MP in the next iteration.
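
A minimal sketch of an output MP using this feature (the smoothing logic, and the assumption that Misc is indexed by the position x, are illustrative only):

SmoothAvC <- function(x, DLM_data, reps=100) {
  TAC <- rlnorm(reps, log(mean(DLM_data@Cat[x,], na.rm=TRUE)), 0.1)
  if (length(DLM_data@Misc) >= x && !is.null(DLM_data@Misc[[x]]$lastTAC)) {
    TAC <- 0.5 * TAC + 0.5 * DLM_data@Misc[[x]]$lastTAC  # smooth towards the previous recommendation
  }
  list(TAC, list(lastTAC=mean(TAC)))  # position 1: the TAC; position 2: stored in the Misc slot
}
class(SmoothAvC) <- "DLM_output"
environment(SmoothAvC) <- asNamespace("DLMtool")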

A new generic trade-off performance plot, TradePlot, has been added.

2.3 A note on version 2.11

Operating model effort is now simulated by a time series of year vertices and the relative magnitude of effort at each vertex. It follows that the slot Fleet@Fgrad is no longer used and has been replaced by three slots containing vectors of equal length: Fleet@EffYears, Fleet@EffUpper and Fleet@EffLower.
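
For example, a roughly increasing effort history could be sketched as follows (the years and effort bounds are illustrative assumptions):

ourfleet <- Generic_fleet
ourfleet@EffYears <- c(1, 20, 40)      # year vertices
ourfleet@EffLower <- c(0.1, 0.5, 0.8)  # lower bound on relative effort at each vertex
ourfleet@EffUpper <- c(0.2, 0.7, 1.0)  # upper bound on relative effort at each vertex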

These effort trajectories can now be specified by a new graphical interface (function ChooseEffort()) which uses points to determine the three slots described above.

Operating model fleet selectivity has been robustified to prevent users from specifying length at first capture (Fleet@L5) and length at full selection (Fleet@LFS) that are unrealistically high. According to our view of reality these now have upper limits of L50 and maximum length, respectively.

A function DOM() has been added that evaluates how often one MP outperforms another across simulations. It is possible that an MP could have higher average performance but perform worse in a higher fraction of simulations; the DOM() function provides a diagnostic for this.
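
A hedged usage sketch, applied to an MSE object (here called ourMSE) returned by runMSE; it is assumed here that DOM() accepts a single MSE object:

DOM(ourMSE)   # how often each MP outperforms each of the others across simulations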

An additional function Sub() has been added, which allows users to subset an MSE object by a vector of MPs, a vector of simulations, or both. This means you no longer have to rerun everything to obtain results for a smaller number of MPs or for particular simulations.
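
For example, applied to an MSE object ourMSE (the argument names MPs and sims are assumptions):

smallMSE <- Sub(ourMSE, MPs=c("DCAC", "DBSRA"), sims=1:10)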

2.4 A note on bug fixes in 2.1.1 and 2.1.2

A bug was found in which length at first capture was being sampled from a uniform distribution U(LB, UB*2) rather than U(LB, UB). In addition, when depletion could not be simulated even with very high fishery catchabilities, an error could occur after more than 10 attempts to find a suitable value of depletion. Finally, length composition simulation in 2.1.1 was not correctly implemented, leading to minor biases.

2.5 A note on version 2.1

The package has undergone a substantial overhaul. In response to popular demand, simulation and data are now entirely length-based. It follows that many objects that worked with 2.0 will no longer be compatible. In most cases it is very quick to make files/objects compatible with version 2.1, but nonetheless we apologise if this is frustrating!

Fundamentally the package is stochastic, so if you run into problems with the code, please report it (along with a random seed) and in the meantime simply try running it again: the problem may be attributable to a rare combination of sampled parameters.

Be warned that if you abort a parallel process (e.g. runMSE()) half-way through you are in the lap of the Gods! It will often be necessary to restart the cluster sfInit() or even restart R.

It’s probably best not to try to use the package for very short-lived stocks (those that live for less than 5 years) because of the problems with approximating fine-scale temporal dynamics with an annual model. Technically you could divide all your parameters by a sub-year resolution, but the TAC would then be set by sub-year and the data would also have to be available at this fine scale, which is highly unlikely in a data-limited setting.

2.6 New to version 2.1

  1. The whole toolkit has moved to a length-based simulator (maturity, fisheries selectivity by length)

  2. We’ve dropped spatial targeting for the moment, as the implementation was flawed and could not distribute fishing correctly with respect to both the density and the amount of resource between the two areas.

  3. Tplot2 adds a different set of trade-offs, including the long-term and short-term probability of achieving 50 per cent of FMSY yield, and average annual variability in yield.

  4. Version 2.0 was bugged and did not include observation error in estimates of current stock abundance and depletion (only biases were simulated). Many thanks to Helena Geromont for spotting this. This has now been corrected.

  5. DLM_data objects now have a slot LHYear, which is a numeric value corresponding to the last historical year. This is needed for some MPs that should run only on the historical data rather than the updated (projected, closed-loop simulation) data.

  6. Post-MSE, you can now run a convergence function, CheckConverg(), to see if performance metrics are stable.

  7. The package now contains CSRA, a tool for calculating very rough estimates of current depletion and fishing mortality rate from mean catch data.

  8. A function getAFC is also available that can be used to convert length estimates to age estimates through a stochastic growth model.

  9. The value of information function (VOI) was bugged in version 2.0. This has now been fixed.

  10. Users can now send their own parameter values to the runMSE function, allowing the use of outputs from stock assessments or correlated parameter values (e.g. K and age at maturity).

  11. After deliberation, Pope’s approximation has been used to account for intra-year mortality (i.e. TACs are taken from biomass at the start of the year, subject to half of the natural mortality rate). This is probably a reasonable approximation in a data-limited setting: alternative structural assumptions for M are eclipsed by uncertainty in M itself and in other operating model parameters such as selectivity and bias in the observation of data such as annual catches.

  12. The simulation of length composition data was bugged in version 2.0. The variability in length at age was taken from the observation model, so using the perfect-information observation model led to no variability in length at age and hence very odd length composition data. This has been solved and, for now, a fixed 10 per cent CV in length at age is assumed (normally distributed).

  13. A bug with the delay-difference MPs (DD and DD4010) has been fixed in which stochastic TACs were sampled when reps = 1; this should just be the mean estimate. The result is that DD is much less variable between years but provides less contrast in the data. In addition to the much less variable catch recommendations, long-term mean performance of the MP is reduced while medium-term performance is improved.

  14. In the move to length-based inputs it is possible to prescribe wild biases for maximum length and length at maturity. In this version these sampled biases are not correlated, so it is possible to create simulated data sets where maximum length is lower than length at 95 per cent maturity and length at 50 per cent maturity. We put a hard ceiling on this such that length at 95 per cent maturity must be below 90 per cent of maximum length and length at 50 per cent maturity must be below 90 per cent of length at 95 per cent maturity. This isn’t great, and it will be improved for v2.11.

  15. The package now works without initiating a cluster sfInit().

  16. A simple modification to DCAC, EDCAC (Harford and Carruthers 2015), has been added that better accounts for absolute stock depletion.

  17. Three new slots are available for MPs to use, relating to the mean length of catches (ML), the modal length of catches (Lc), and the mean length of catches of fish larger than Lc (Lbar).

2.7 New to version 2.0

  1. Much has changed in package terminology to make the package more generally applicable. For example, the OFL (overfishing limit, FMSY x current biomass) now belongs to a larger class of TACs (Total Allowable Catches).

  2. There are now just two classes of DLM MPs: DLM_output (MPs linked to output controls, e.g. TACs) and DLM_input (MPs linked to input controls such as time-area closures, age selectivity and effort). The new DLM_input methods have four components: fractional reallocation of spatial effort, the fraction of effort in the final historical year that is prescribed in the current year, spatial limits on fishing mortality, and a user-defined age-selectivity curve. For example, given a hypothetical stock with 8 age classes, a DLM_input method might return the vector c(0.5, 0.8, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1). This is interpreted as a 50 per cent reallocation of spatial effort (Allocation = 0.5), a total effort that is 80 per cent of historical levels (Effort = 0.8), a closure in area 1 and full fishing in area 2 (Spatial = c(0,1)), and knife-edge selectivity at age class 5 (Selectivity = c(0,0,0,0,1,1,1,1)) [note that Selectivity has changed in newer versions of the package]. To demonstrate this new feature there are four new input controls: current effort (curE), 75 per cent of current effort (curE75), age selectivity that matches the maturity ogive (matagelim) and a marine reserve in area 1 (area1MR) [note that matagelim has changed to matlenlim in recent versions].

  3. A ‘dumb’ MP has been added: Mean Catch Depletion (MCD), which simply calculates a TAC based on mean catches and depletion, i.e. depletion x 2 x mean catch. This is to demonstrate the (theoretically) very high information content of a reliable estimate of current stock depletion.

  4. A better length composition simulator has been added. Note that this still renews the normal length structure between ages and does not properly simulate the higher mortality rate of larger, faster growing fish (a growth type group simulator is on its way).

  5. Help documentation has been much improved, including complete guides for the Fleet, Stock, Observation and MSE objects, e.g. class?MSE.

  6. Minor bugs have been found with the help of Helena Geromont including a problem with update intervals of 1 and low simulated steepness values.

  7. Reliability is much improved following a full combinatorial test of all Fleet, Stock, Observation objects against all MPs.

  8. A dedicated Value of information function is now available for MSE objects: VOI(MSEobject) which is smarter than the former version which was included in plot(MSE object class).

  9. Plotting functions have been improved, particularly Tplot, Kplot, Pplot and plot(DLM_data object class)

  10. SPmod has been robustified to stop strongly negative surplus production estimates from leading to erratic behavior.

  11. The butterfish stock type now has less variable recruitment and a slightly lower natural mortality rate, as the previous values were rather extreme and led to data generation errors (with a natural mortality rate as high as 0.9, butterfish is right at the limit of what can be simulated reasonably with an annual age-structured operating model).

2.8 Coming soon to v3.02

Worm plots (likelihood of obtaining biomass reference levels)

Data-rich assessment MPs (for comparison)

3 Prerequisites

At the start of every session there are a few things to do: load the DLMtool library, make data available and set up parallel computing.

3.1 Loading the library

library(DLMtool)
#> Loading required package: snowfall
#> Loading required package: snow
#> Loading required package: boot
#> Loading required package: MASS
#> Loading required package: parallel
#> 
#> Attaching package: 'parallel'
#> The following objects are masked from 'package:snow':
#> 
#>     clusterApply, clusterApplyLB, clusterCall, clusterEvalQ,
#>     clusterExport, clusterMap, clusterSplit, makeCluster,
#>     parApply, parCapply, parLapply, parRapply, parSapply,
#>     splitIndices, stopCluster

3.2 Unpacking the data

The list object DLMdat is unpacked, which places all example objects and data in the current workspace.

for(i in 1:length(DLMdat))assign(DLMdat[[i]]@Name,DLMdat[[i]])

3.2.1 Initiating the cluster

Note that most computers use hyperthreading technology, so a quad-core PC has 8 threads; the number of CPUs is set to 2 here to meet CRAN-R package submission requirements.

sfInit(parallel=TRUE, cpus=2) 

You can automatically detect the number of threads using detectCores(), i.e. type sfInit(parallel=TRUE, cpus=detectCores()).

3.3 Exporting all data and objects to the cluster

In order to make all DLMtool functions and objects available for parallel processing we export them to the cluster.

sfExportAll()

3.4 Set a random seed

In order to make results presented here reproducible, we set a random seed for this R session.

set.seed(1) 

4 Quick start

Here is a quick demonstration of core DLMtool functionality.

4.1 Define an operating model

The operating model is the ‘simulated reality’: a series of known simulations for testing various data-limited MPs. Operating models can either be specified in detail, variable by variable (e.g. sample natural mortality rate between 0.2 and 0.3), or rapidly constructed from a set of predefined Stock, Fleet and Observation models. In this case we take the latter approach and pick the Blue_shark stock type, a Generic fleet type and an observation model that generates data that can be both imprecise and biased.

OM <- new('OM', Blue_shark, Generic_fleet, Imprecise_Biased)

The operating model class ‘OM’ has many different slots which control the ranges of population and fleet parameters that may be sampled in addition to parameters that control the quality of the data simulated. You can list these using slotNames()

slotNames(OM)
#>  [1] "Name"         "nyears"       "maxage"       "R0"          
#>  [5] "M"            "Msd"          "Mgrad"        "h"           
#>  [9] "SRrel"        "Linf"         "K"            "t0"          
#> [13] "Ksd"          "Kgrad"        "Linfsd"       "Linfgrad"    
#> [17] "recgrad"      "a"            "b"            "D"           
#> [21] "Size_area_1"  "Frac_area_1"  "Prob_staying" "Source"      
#> [25] "L50"          "L50_95"       "SelYears"     "AbsSelYears" 
#> [29] "L5"           "LFS"          "Vmaxlen"      "L5Lower"     
#> [33] "L5Upper"      "LFSLower"     "LFSUpper"     "VmaxLower"   
#> [37] "VmaxUpper"    "isRel"        "beta"         "Spat_targ"   
#> [41] "Fsd"          "Period"       "Amplitude"    "EffYears"    
#> [45] "EffLower"     "EffUpper"     "qinc"         "qcv"         
#> [49] "AC"           "Cobs"         "Cbiascv"      "CAA_nsamp"   
#> [53] "CAA_ESS"      "CAL_nsamp"    "CAL_ESS"      "CALcv"       
#> [57] "Iobs"         "Perr"         "Mcv"          "Kcv"         
#> [61] "t0cv"         "Linfcv"       "LFCcv"        "LFScv"       
#> [65] "B0cv"         "FMSYcv"       "FMSY_Mcv"     "BMSY_B0cv"   
#> [69] "LenMcv"       "rcv"          "Dbiascv"      "Dcv"         
#> [73] "Btbias"       "Btcv"         "Fcurbiascv"   "Fcurcv"      
#> [77] "hcv"          "Icv"          "maxagecv"     "Reccv"       
#> [81] "Irefcv"       "Crefcv"       "Brefcv"

or can look up the help file entry:

class?OM

4.2 Define a subset of data-limited MPs

There are two types of MP currently included in DLMtool: DLM_output (output controls, e.g. a TAC) and DLM_input (input controls, e.g. size/age/spatial restrictions). In this example we use the generic class finder avail to list all available methods of class ‘DLM_output’:

avail('DLM_output')
#>  [1] "AvC"         "BK"          "BK_CC"       "BK_ML"       "CC1"        
#>  [6] "CC4"         "CompSRA"     "CompSRA4010" "DAAC"        "DBSRA"      
#> [11] "DBSRA4010"   "DBSRA_40"    "DBSRA_ML"    "DCAC"        "DCAC4010"   
#> [16] "DCAC_40"     "DCAC_ML"     "DD"          "DD4010"      "DepF"       
#> [21] "DynF"        "FMSYref"     "FMSYref50"   "FMSYref75"   "Fadapt"     
#> [26] "Fdem"        "Fdem_CC"     "Fdem_ML"     "Fratio"      "Fratio4010" 
#> [31] "Fratio_CC"   "Fratio_ML"   "GB_CC"       "GB_slope"    "GB_target"  
#> [36] "Gcontrol"    "HDAAC"       "IT10"        "IT5"         "ITM"        
#> [41] "Islope1"     "Islope4"     "Itarget1"    "Itarget4"    "LBSPR_ItTAC"
#> [46] "LstepCC1"    "LstepCC4"    "Ltarget1"    "Ltarget4"    "MCD"        
#> [51] "MCD4010"     "NFref"       "Rcontrol"    "Rcontrol2"   "SBT1"       
#> [56] "SBT2"        "SPMSY"       "SPSRA"       "SPSRA_ML"    "SPmod"      
#> [61] "SPslope"     "YPR"         "YPR_CC"      "YPR_ML"

and select some for simulation testing:

MPs <- c("Fratio", "DCAC", "Fdem", "DD")    

To find out more about these MPs you can use the built-in R help functions. E.g:

?Fratio
?DBSRA

Or simply view the code. E.g:

Fratio
#> function (x, DLM_data, reps = 100) 
#> {
#>     depends = "DLM_data@Abun,DLM_data@CV_Abun,DLM_data@FMSY_M, DLM_data@CV_FMSY_M,DLM_data@Mort,DLM_data@CV_Mort"
#>     Ac <- trlnorm(reps, DLM_data@Abun[x], DLM_data@CV_Abun[x])
#>     TACfilter(Ac * trlnorm(reps, DLM_data@Mort[x], DLM_data@CV_Mort[x]) * 
#>         trlnorm(reps, DLM_data@FMSY_M[x], DLM_data@CV_FMSY_M[x]))
#> }
#> <environment: namespace:DLMtool>
#> attr(,"class")
#> [1] "DLM_output"

4.3 Run an MSE and plot results

The MPs can now be tested using the operating model. NOTE that this is just a demonstration; in a real MSE you should use many more simulations (nsim greater than 200), more samples per method (reps greater than 100) and perhaps a more frequent assessment interval (an interval of 2 or 3 years). Note that when reps is set to 1, all stochastic MPs use the mean value of an input and do not sample from the distribution according to the specified CV (the DLM_output MPs become deterministic and no longer produce samples of the TAC recommendation).

SnapMSE <- runMSE(OM, MPs, nsim=16, reps=1, proyears=30, interval=10)

The generic plot method provides (1) overfishing trajectories (2) Kobe plots and (3) trade-off plots of expected (mean) performance of the MPs in terms of stock status, overfishing and yield.

plot(SnapMSE)

You can access these plots individually: trade-offs - Tplot(), overfishing trajectories - Pplot() and Kobe plots - Kplot()

4.4 Applying MPs to real data

A number of real DLM data objects (class DLM_data) were loaded into the workspace at the start of this session. In this section we examine a real data object and apply data-limited MPs to these data. Just as with the operating model, we can find all the objects of the real data class DLM_data:

avail('DLM_data')
#> [1] "ourReefFish"        "Atlantic_mackerel"  "China_rockfish"    
#> [4] "Cobia"              "Example_datafile"   "Gulf_blue_tilefish"
#> [7] "Red_snapper"        "Simulation_1"

we can list the slots of a DLM_data object:

slotNames(China_rockfish)
#>  [1] "Name"       "Year"       "Cat"        "Ind"        "Rec"       
#>  [6] "t"          "AvC"        "Dt"         "Mort"       "FMSY_M"    
#> [11] "BMSY_B0"    "Cref"       "Bref"       "Iref"       "L50"       
#> [16] "L95"        "LFC"        "LFS"        "CAA"        "Dep"       
#> [21] "Abun"       "vbK"        "vbLinf"     "vbt0"       "wla"       
#> [26] "wlb"        "steep"      "CV_Cat"     "CV_Dt"      "CV_AvC"    
#> [31] "CV_Ind"     "CV_Mort"    "CV_FMSY_M"  "CV_BMSY_B0" "CV_Cref"   
#> [36] "CV_Bref"    "CV_Iref"    "CV_Rec"     "CV_Dep"     "CV_Abun"   
#> [41] "CV_vbK"     "CV_vbLinf"  "CV_vbt0"    "CV_L50"     "CV_LFC"    
#> [46] "CV_LFS"     "CV_wla"     "CV_wlb"     "CV_steep"   "sigmaL"    
#> [51] "MaxAge"     "Units"      "Ref"        "Ref_type"   "Log"       
#> [56] "params"     "PosMPs"     "MPs"        "OM"         "Obs"       
#> [61] "TAC"        "TACbias"    "Sense"      "CAL_bins"   "CAL"       
#> [66] "MPrec"      "MPeff"      "ML"         "Lbar"       "Lc"        
#> [71] "LHYear"     "Misc"

and also look up this class in the help file:

class?DLM_data

DLMtool includes functions to interrogate a real data object to see what methods can be applied, those that cannot and also what data are needed to get those methods working:

Can(China_rockfish)
#>  [1] "AvC"        "CC1"        "CC4"        "DAAC"       "DCAC"      
#>  [6] "DCAC4010"   "DCAC_40"    "HDAAC"      "NFref"      "MRnoreal"  
#> [11] "MRreal"     "curE"       "curE75"     "matlenlim"  "matlenlim2"
Cant(China_rockfish)
#>       [,1]          [,2]                    
#>  [1,] "BK"          "Produced all NA scores"
#>  [2,] "BK_CC"       "Insufficient data"     
#>  [3,] "BK_ML"       "Insufficient data"     
#>  [4,] "CompSRA"     "Insufficient data"     
#>  [5,] "CompSRA4010" "Insufficient data"     
#>  [6,] "DBSRA"       "Produced all NA scores"
#>  [7,] "DBSRA4010"   "Produced all NA scores"
#>  [8,] "DBSRA_40"    "Produced all NA scores"
#>  [9,] "DBSRA_ML"    "Produced all NA scores"
#> [10,] "DCAC_ML"     "Insufficient data"     
#> [11,] "DD"          "Insufficient data"     
#> [12,] "DD4010"      "Insufficient data"     
#> [13,] "DepF"        "Produced all NA scores"
#> [14,] "DynF"        "Insufficient data"     
#> [15,] "FMSYref"     "Insufficient data"     
#> [16,] "FMSYref50"   "Insufficient data"     
#> [17,] "FMSYref75"   "Insufficient data"     
#> [18,] "Fadapt"      "Insufficient data"     
#> [19,] "Fdem"        "Insufficient data"     
#> [20,] "Fdem_CC"     "Insufficient data"     
#> [21,] "Fdem_ML"     "Insufficient data"     
#> [22,] "Fratio"      "Produced all NA scores"
#> [23,] "Fratio4010"  "Produced all NA scores"
#> [24,] "Fratio_CC"   "Insufficient data"     
#> [25,] "Fratio_ML"   "Insufficient data"     
#> [26,] "GB_CC"       "Produced all NA scores"
#> [27,] "GB_slope"    "Insufficient data"     
#> [28,] "GB_target"   "Insufficient data"     
#> [29,] "Gcontrol"    "Insufficient data"     
#> [30,] "IT10"        "Insufficient data"     
#> [31,] "IT5"         "Insufficient data"     
#> [32,] "ITM"         "Insufficient data"     
#> [33,] "Islope1"     "Insufficient data"     
#> [34,] "Islope4"     "Insufficient data"     
#> [35,] "Itarget1"    "Insufficient data"     
#> [36,] "Itarget4"    "Insufficient data"     
#> [37,] "LBSPR_ItTAC" "Produced all NA scores"
#> [38,] "LstepCC1"    "Insufficient data"     
#> [39,] "LstepCC4"    "Insufficient data"     
#> [40,] "Ltarget1"    "Insufficient data"     
#> [41,] "Ltarget4"    "Insufficient data"     
#> [42,] "MCD"         "Produced all NA scores"
#> [43,] "MCD4010"     "Produced all NA scores"
#> [44,] "Rcontrol"    "Insufficient data"     
#> [45,] "Rcontrol2"   "Insufficient data"     
#> [46,] "SBT1"        "Insufficient data"     
#> [47,] "SBT2"        "Produced all NA scores"
#> [48,] "SPMSY"       "Insufficient data"     
#> [49,] "SPSRA"       "Insufficient data"     
#> [50,] "SPSRA_ML"    "Insufficient data"     
#> [51,] "SPmod"       "Insufficient data"     
#> [52,] "SPslope"     "Insufficient data"     
#> [53,] "YPR"         "Insufficient data"     
#> [54,] "YPR_CC"      "Insufficient data"     
#> [55,] "YPR_ML"      "Insufficient data"     
#> [56,] "DDe"         "Insufficient data"     
#> [57,] "DDe75"       "Insufficient data"     
#> [58,] "DDes"        "Insufficient data"     
#> [59,] "DTe40"       "Insufficient data"     
#> [60,] "DTe50"       "Insufficient data"     
#> [61,] "ITe10"       "Insufficient data"     
#> [62,] "ITe5"        "Insufficient data"     
#> [63,] "ItargetE1"   "Insufficient data"     
#> [64,] "ItargetE4"   "Insufficient data"     
#> [65,] "LBSPR_ItEff" "Produced all NA scores"
#> [66,] "LBSPR_ItSel" "Produced all NA scores"
#> [67,] "LstepCE1"    "Insufficient data"     
#> [68,] "LstepCE2"    "Insufficient data"     
#> [69,] "LtargetE1"   "Insufficient data"     
#> [70,] "LtargetE4"   "Insufficient data"     
#> [71,] "slotlim"     "Insufficient data"
Needed(China_rockfish)
#>  [1] "BK: LFC, Abun, vbK, vbLinf"                                                 
#>  [2] "BK_CC: LFC, CAA, vbK, vbLinf"                                               
#>  [3] "BK_ML: LFC, vbK, vbLinf, CAL"                                               
#>  [4] "CompSRA: L50, LFC, LFS, CAA, vbK, vbLinf, vbt0, wla, wlb, steep, MaxAge"    
#>  [5] "CompSRA4010: L50, LFC, LFS, CAA, vbK, vbLinf, vbt0, wla, wlb, steep, MaxAge"
#>  [6] "DBSRA: L50, Dep, vbK, vbLinf, vbt0"                                         
#>  [7] "DBSRA4010: L50, Dep, vbK, vbLinf, vbt0"                                     
#>  [8] "DBSRA_40: L50, Dep, vbK, vbLinf, vbt0"                                      
#>  [9] "DBSRA_ML: L50, Dep, vbK, vbLinf, vbt0, CAL"                                 
#> [10] "DCAC_ML: vbK, vbLinf, CAL"                                                  
#> [11] "DD: Ind, L50, vbK, vbLinf, vbt0, wla, wlb, MaxAge"                          
#> [12] "DD4010: Ind, L50, vbK, vbLinf, vbt0, wla, wlb, MaxAge"                      
#> [13] "DepF: Dep, Abun"                                                            
#> [14] "DynF: Ind, Abun"                                                            
#> [15] "FMSYref: OM"                                                                
#> [16] "FMSYref50: OM"                                                              
#> [17] "FMSYref75: OM"                                                              
#> [18] "Fadapt: Ind, Abun"                                                          
#> [19] "Fdem: Abun, vbK, vbLinf, vbt0, wla, wlb, steep, MaxAge"                     
#> [20] "Fdem_CC: CAA, vbK, vbLinf, vbt0, wla, wlb, steep, MaxAge"                   
#> [21] "Fdem_ML: vbK, vbLinf, vbt0, wla, wlb, steep, MaxAge, CAL"                   
#> [22] "Fratio: Abun"                                                               
#> [23] "Fratio4010: Dep, Abun"                                                      
#> [24] "Fratio_CC: CAA"                                                             
#> [25] "Fratio_ML: vbK, vbLinf, CAL"                                                
#> [26] "GB_CC: Cref"                                                                
#> [27] "GB_slope: Ind"                                                              
#> [28] "GB_target: Ind, Cref, Iref"                                                 
#> [29] "Gcontrol: Ind, Abun"                                                        
#> [30] "IT10: Ind, Iref, MPrec"                                                     
#> [31] "IT5: Ind, Iref, MPrec"                                                      
#> [32] "ITM: Ind, Iref, MPrec"                                                      
#> [33] "Islope1: Ind, MPrec"                                                        
#> [34] "Islope4: Ind, MPrec"                                                        
#> [35] "Itarget1: Ind"                                                              
#> [36] "Itarget4: Ind"                                                              
#> [37] "LBSPR_ItTAC: L50, L95, vbK, vbLinf, wlb, steep, CAL, MPrec"                 
#> [38] "LstepCC1: MPrec, ML"                                                        
#> [39] "LstepCC4: MPrec, ML"                                                        
#> [40] "Ltarget1: ML"                                                               
#> [41] "Ltarget4: ML"                                                               
#> [42] "MCD: Dep"                                                                   
#> [43] "MCD4010: Dep"                                                               
#> [44] "Rcontrol: Ind, Dep, Abun, vbK, vbLinf, vbt0, steep, MaxAge"                 
#> [45] "Rcontrol2: Ind, Dep, Abun, vbK, vbLinf, vbt0, steep, MaxAge"                
#> [46] "SBT1: Ind"                                                                  
#> [47] "SBT2: Rec, Cref"                                                            
#> [48] "SPMSY: L50, vbK, vbLinf, vbt0, MaxAge"                                      
#> [49] "SPSRA: Dep, vbK, vbLinf, vbt0, steep, MaxAge"                               
#> [50] "SPSRA_ML: vbK, vbLinf, vbt0, steep, MaxAge, CAL"                            
#> [51] "SPmod: Ind, Abun"                                                           
#> [52] "SPslope: Ind, Abun"                                                         
#> [53] "YPR: LFS, Abun, vbK, vbLinf, vbt0, wla, wlb, MaxAge"                        
#> [54] "YPR_CC: LFS, CAA, vbK, vbLinf, vbt0, wla, wlb, MaxAge"                      
#> [55] "YPR_ML: LFS, vbK, vbLinf, vbt0, wla, wlb, MaxAge, CAL"                      
#> [56] "DDe: Ind, L50, vbK, vbLinf, vbt0, wla, wlb, MaxAge"                         
#> [57] "DDe75: Ind, L50, vbK, vbLinf, vbt0, wla, wlb, MaxAge"                       
#> [58] "DDes: Ind, L50, vbK, vbLinf, vbt0, wla, wlb, MaxAge, MPeff"                 
#> [59] "DTe40: Dep, MPeff"                                                          
#> [60] "DTe50: Dep, MPeff"                                                          
#> [61] "ITe10: Ind, Iref, MPeff"                                                    
#> [62] "ITe5: Ind, Iref, MPeff"                                                     
#> [63] "ItargetE1: Ind, MPeff"                                                      
#> [64] "ItargetE4: Ind, MPeff"                                                      
#> [65] "LBSPR_ItEff: L50, L95, vbK, vbLinf, wlb, steep, CAL, MPeff"                 
#> [66] "LBSPR_ItSel: L50, L95, vbK, vbLinf, wlb, steep, CAL"                        
#> [67] "LstepCE1: MPeff, ML"                                                        
#> [68] "LstepCE2: MPeff, ML"                                                        
#> [69] "LtargetE1: MPeff, ML"                                                       
#> [70] "LtargetE4: MPeff, ML"                                                       
#> [71] "slotlim: L50, LFC, LFS, vbLinf"

The function TAC() automatically detects which MPs can be applied and calculates a TAC distribution for each MP, which can then be plotted.

RockReal<-TAC(China_rockfish)
plot(RockReal)

4.5 Conduct a sensitivity analysis

The sensitivity plot reveals which inputs to an MP most strongly affect the TAC recommendation. In principle this may help to focus data discussion on the most critical inputs and their credibility.

RockReal <- Sense(RockReal,"DCAC")

5 From MSE to management recommendations

In this section we take a more thorough, systematic approach to MSE and data implementation. This is an example of how DLMtool may be used to select MPs and then apply them to real data. This is intended to be a straw-man demonstration and in no way should this be interpreted as a recommendation about appropriate management objectives!

In this example our real-life stock is a moderately long-lived reef fish species with moderately high recruitment compensation that has been subject to fairly consistent fishing pressure over recent years. We suspect that fishing activities do not effectively operate on older age classes, since the fish exhibit ontogenetic movements offshore where there is less fishing. In general the stock is thought to be at relatively low levels going by catch rate observations, but frankly we don’t have a precise handle on stock depletion. Since fishing activities have changed their spatial distribution and the stock is targeted, there is the potential for hyperstability in our observations of catch rates over time.

This section assumes that you have completed the prerequisites.

5.1 Building an appropriate operating model

We start by specifying an operating model. Looking at the pre-built stock objects:

avail('Stock')
#>  [1] "Albacore"          "Blue_shark"        "Bluefin_tuna"     
#>  [4] "Bluefin_tuna_WAtl" "Butterfish"        "Herring"          
#>  [7] "Mackerel"          "Porgy"             "Rockfish"         
#> [10] "Snapper"           "Sole"              "Toothfish"

We decide that the Snapper stock object is the closest fit.

ourstock <- Snapper

We make some modifications to better suit our particular case study, such as a higher natural mortality rate, stock depletion (D) between 5 and 30 per cent of unfished levels, and a candidate MPA (covering 5 - 15 per cent of unfished biomass) with retention (probability of staying in the MPA) of 40 - 99 per cent. Remember, to get help on the OM objects and their slots, type class?OM (or, for the components of the OM, class?Stock, class?Fleet, class?Observation) at the command line.

ourstock@M <- c(0.2,0.25)
ourstock@maxage <- 18
ourstock@D <- c(0.05,0.3)
ourstock@Frac_area_1 <- c(0.05,0.15)
ourstock@Prob_staying <- c(0.4,0.99)

We now choose a fleet type for our operating model and choose to modify a generic fleet of flat recent effort, adding dome-shaped vulnerability as a possibility for older age classes:

ourfleet <- Generic_FlatE
ourfleet@Vmaxlen <- c(0.5, 1)

Finally, using our fleet and stock objects we construct an operating model object assuming that the data we have are likely to be imprecise and potentially biased. Type avail("Observation") at the command line to see the various pre-defined observation model objects.

ourOM <- new('OM',ourstock, ourfleet, Imprecise_Biased)

5.2 MSE evaluation of methods

Now that we have our operating model we run a trial MSE. In this case we use a very small number of simulations (16, which is low to meet CRAN-R package building requirements) but change the length of the projection and the length of the interval between updates to reflect our stock and management system.

We can see all of the available output control methods using the avail function:

avail("DLM_output")
#>  [1] "AvC"         "BK"          "BK_CC"       "BK_ML"       "CC1"        
#>  [6] "CC4"         "CompSRA"     "CompSRA4010" "DAAC"        "DBSRA"      
#> [11] "DBSRA4010"   "DBSRA_40"    "DBSRA_ML"    "DCAC"        "DCAC4010"   
#> [16] "DCAC_40"     "DCAC_ML"     "DD"          "DD4010"      "DepF"       
#> [21] "DynF"        "FMSYref"     "FMSYref50"   "FMSYref75"   "Fadapt"     
#> [26] "Fdem"        "Fdem_CC"     "Fdem_ML"     "Fratio"      "Fratio4010" 
#> [31] "Fratio_CC"   "Fratio_ML"   "GB_CC"       "GB_slope"    "GB_target"  
#> [36] "Gcontrol"    "HDAAC"       "IT10"        "IT5"         "ITM"        
#> [41] "Islope1"     "Islope4"     "Itarget1"    "Itarget4"    "LBSPR_ItTAC"
#> [46] "LstepCC1"    "LstepCC4"    "Ltarget1"    "Ltarget4"    "MCD"        
#> [51] "MCD4010"     "NFref"       "Rcontrol"    "Rcontrol2"   "SBT1"       
#> [56] "SBT2"        "SPMSY"       "SPSRA"       "SPSRA_ML"    "SPmod"      
#> [61] "SPslope"     "YPR"         "YPR_CC"      "YPR_ML"

and input control methods:

avail("DLM_input")

If you do not specify a vector of particular methods, the MSE will run for all possible MPs. Note that this could take a few minutes depending on how monstrous your computer is. In this example, we will choose a subset of the available methods:

MPs <- c("BK", "CC1", "CompSRA", "DBSRA", "DBSRA4010", "DCAC", "DCAC4010", "DepF", "DynF",
         "Fratio", "Itarget1", "Itarget4", "MCD", "MCD4010", "SBT1")

Note that in a real setting it might be advisable to increase the number of simulations (nsim) to at least 200 and, if stochastic MPs are to be used, increase the samples per method (reps) to at least 100 for this first stage to obtain stable aggregate results.

ourMSE <- runMSE(ourOM, MPs=MPs, proyears=20, interval=5, nsim=16,reps=1)

A summary trade-off plot reveals a wide range of performance:

Tplot(ourMSE)

In this example process, we decide that we would like to select a targeted subset of these MPs that have greater than 30 per cent of long-term best yield (given an ideal fixed fishing mortality rate), less than a 50 per cent probability of overfishing and less than a 20 per cent probability of dropping below a low stock level, in this case 50 per cent of BMSY. To do this we calculate the summary table and subset it:

Results <- summary(ourMSE) 
head(Results)
#>          MP Yield stdev   POF stdev    P10 stdev   P50 stdev  P100 stdev
#> 1        BK 60.14 37.66 64.69  44.78 25.94 31.42 62.81 45.02 75.00 39.87
#> 2       CC1 45.41 45.61 67.50  39.79 33.75 30.69 53.75 42.68 74.38 30.87
#> 3   CompSRA 49.22 69.20 81.56  28.39 46.56 32.90 68.12 34.35 85.31 23.70
#> 4     DBSRA 84.19 39.02 30.94  39.21  4.69 18.75 18.12 27.32 58.44 32.23
#> 5 DBSRA4010 83.34 64.79 21.56  25.61  0.31  1.25 15.31 23.63 53.12 33.31
#> 6      DCAC 47.46 31.52 50.31  46.13 18.44 23.29 46.25 43.53 65.00 36.10
#>    LTY  STY   VY
#> 1 48.1 55.6 31.2
#> 2 37.5 85.6 31.2
#> 3 22.5 63.1  6.2
#> 4 71.9 46.9 75.0
#> 5 50.0 37.5 25.0
#> 6 51.9 76.2 87.5
Targetted <- subset(Results, Results$Yield>30 & Results$POF<50 & Results$P50<20)
Targetted
#>           MP Yield stdev   POF stdev   P10 stdev.1   P50 stdev.2  P100
#> 4      DBSRA 84.19 39.02 30.94  39.21 4.69   18.75 18.12   27.32 58.44
#> 5  DBSRA4010 83.34 64.79 21.56  25.61 0.31    1.25 15.31   23.63 53.12
#> 7   DCAC4010 60.94 36.15  4.38  17.50 0.62    2.50  9.69   19.53 38.44
#> 8       DepF 89.54 45.02 26.56  33.00 1.56    4.37 19.38   28.51 51.56
#> 12  Itarget4 72.74 32.23  1.88   5.12 0.00    0.00  8.12    9.81 37.19
#> 13       MCD 73.92 40.64  8.44  23.99 4.69   18.75 11.25   21.95 44.06
#> 14   MCD4010 71.54 39.40  5.00  14.02 0.31    1.25  9.69   19.53 36.88
#>    stdev.3  LTY  STY    VY
#> 4    32.23 71.9 46.9  75.0
#> 5    33.31 50.0 37.5  25.0
#> 7    36.41 61.9 37.5  56.2
#> 8    39.74 77.5 43.8  37.5
#> 12   28.63 75.0 25.0 100.0
#> 13   33.97 65.6 43.8  93.8
#> 14   33.31 65.6 34.4  43.8

Our new subsetted methods can be used to run a more focused MSE that includes a greater number of simulations for a detailed assessment of performance. Again note that in a real setting it would be advisable to increase the number of simulations further to at least 400. You might also want to increase the number of stochastic samples per method (reps) to 200 or more.

TargMP <- Targetted$MP[grep("FMSYref",Targetted$MP,invert=T)]
ourMSE2 <- runMSE(ourOM, TargMP, proyears=20, interval=5, nsim=16, reps=1)

Let’s check convergence in some performance metrics as simulations are added (the first plot is for all MPs, the second shows only those that did not converge):

CheckConverg(ourMSE2)

#> Some MPs may not have converged in 16 iterations (threshold = 2%)
#> MPs are: DBSRA  DBSRA4010  DCAC4010  DepF  Itarget4  MCD  MCD4010
#> MPs #: 1  2  3  4  5  6  7
#>   Num        MP
#> 1   1     DBSRA
#> 2   2 DBSRA4010
#> 3   3  DCAC4010
#> 4   4      DepF
#> 5   5  Itarget4
#> 6   6       MCD
#> 7   7   MCD4010

Several detailed plots can provide greater information about exactly how each MP performed over the projected time period including Projection plots:

Pplot(ourMSE2)

Kobe plots:

Kplot(ourMSE2)

and alternative trade-off plots:

Tplot2(ourMSE2)

TradePlot(ourMSE2, XThresh=c(0,0), YThresh=c(0,0), ShowLabs = TRUE)

#> [[1]]
#>          MP    X    Y
#> 1 DBSRA4010 78.4 83.3
#> 2  Itarget4 98.1 72.7
#> 3       MCD 91.6 73.9
#> 4      DepF 73.4 89.5
#> 5   MCD4010 95.0 71.5
#> 6     DBSRA 69.1 84.2
#> 7  DCAC4010 95.6 60.9
#> 
#> [[2]]
#>          MP    X     Y
#> 1  Itarget4 91.9 100.0
#> 2       MCD 88.8  93.8
#> 3     DBSRA 81.9  81.2
#> 4   MCD4010 90.3  68.8
#> 5      DepF 80.6  68.8
#> 6  DCAC4010 90.3  62.5
#> 7 DBSRA4010 84.7  37.5

5.3 Value of information analysis

A value of information function is available that allows users to establish which of the sampled parameters of the operating model or the observation model are most correlated with performance. This helps to guide future data collection effort to target those inputs that are most critical for performance. The VOI function also provides a metric of the robustness of MPs: while an MP’s aggregate mean performance may be quite good it might be concerning if performance was strongly compromised given alternative plausible scenarios.

The inputs are organized from most correlated to least correlated, left to right. The labels of the plots indicate a sampled parameter in either ourMSE2@OM (operating model parameters) or ourMSE2@Obs (observation model parameters). The help files for these slots provide details on how to interpret the labels. For example, Mbias is bias in the observed value of natural mortality rate and Dbias is bias in the observed value of stock depletion. Note that in a real analysis many more simulations should be undertaken to provide a reliable performance pattern, i.e. nsim should be higher when using runMSE().

VOI(ourMSE2)

#> [[1]]
#>           MP            1        2            3            4            5
#> 1      DBSRA          Esd        K            M Prob_staying         FMSY
#> 2                   39.52    38.39        38.26        37.12        37.11
#> 3  DBSRA4010 Prob_staying     ageM  Frac_area_1       procsd         qinc
#> 4                   67.89    67.72        66.44        62.24         61.1
#> 5   DCAC4010         FMSY        K         lenM        Kgrad Prob_staying
#> 6                   38.86    36.31        33.78        33.49        33.43
#> 7       DepF      recgrad Linfgrad      Vmaxlen    Depletion         qinc
#> 8                   44.19    43.02        42.37        42.09        39.58
#> 9   Itarget4            M    len95  Frac_area_1         FMSY           AC
#> 10                  33.32    32.02         30.8         30.5        30.16
#> 11       MCD            K     FMSY Prob_staying       procsd          Esd
#> 12                  44.07    43.51        41.57        41.09        40.49
#> 13   MCD4010  Frac_area_1     FMSY          LFC       FMSY_M           L5
#> 14                  42.49    41.31        38.71        38.35        37.74
#>          6
#> 1   Linfsd
#> 2    34.63
#> 3       AC
#> 4    59.03
#> 5  Vmaxlen
#> 6    33.31
#> 7      LFS
#> 8     39.4
#> 9       L5
#> 10   29.83
#> 11    lenM
#> 12   39.42
#> 13     Esd
#> 14   37.03
#> 
#> [[2]]
#>           MP     1           2     3      4          5           6
#> 1      DBSRA Kbias         Csd  Derr t0bias      Cbias  FMSY_Mbias
#> 2            37.27       37.13 36.92     36      35.99       31.94
#> 3  DBSRA4010 Dbias         Csd Cbias   Derr     t0bias       Mbias
#> 4            67.82       58.38 58.36  58.24      58.01       53.26
#> 5   DCAC4010   Csd        Derr Dbias  Cbias FMSY_Mbias BMSY_B0bias
#> 6            36.87       35.13 32.62  31.64      27.77       25.15
#> 7       DepF Mbias BMSY_B0bias Dbias   Derr       Aerr  FMSY_Mbias
#> 8            37.38       35.57  31.4  28.78      28.23       27.42
#> 9   Itarget4   Csd       Cbias   Isd                              
#> 10           31.26       31.14 30.99                              
#> 11       MCD  Derr         Csd Dbias  Cbias                       
#> 12           46.39       36.62 34.77  32.68                       
#> 13   MCD4010   Csd       Dbias Cbias   Derr                       
#> 14           41.99       38.56 34.08  32.29

5.4 Applying MPs to our real data

A real DLM_data object ourReefFish, was loaded into the current R session when we loaded up the data in the Prerequisites section above. We can summarise some of the data in this DLM_data object using the generic function summary():

summary(ourReefFish)

The Can() function reveals that a range of MPs are available, and we can calculate and plot the TACs for the available methods:

ourReefFish <- TAC(ourReefFish)
plot(ourReefFish)


If we focus on output controls, there is a cluster of MPs available for our real data that offer comparable performance, such as the Mean Catch Depletion method (MCD). We can use sensitivity testing to better understand how fragile TAC recommendations are to changes in our data inputs:

ourReefFish <- Sense(ourReefFish,'MCD')

Not surprisingly, the Mean Catch Depletion method is sensitive to the error in historical catch and current depletion.

5.5 What have we learned?

In this simple walkthrough we have established which MPs work best for our stock, fishery and observation type. It was possible to establish the frailties of these MPs by examining which simulated parameters drive yield and the probability of overfishing (using the VOI() function). Our application to real data produced actual TAC recommendations for the MPs that were available. Several MPs could not be applied, and the MSE results can be used to evaluate whether these MPs are likely to provide benefits in terms of both yield and limiting overfishing (remember that with such a small number of simulations in this example, these results are not reliable!). We know what data are necessary to make these work but have yet to decide whether collecting these data is worthwhile. Above all, the approach is transparent and reproducible.

Depending on how utility is characterised, it may be possible to establish the cost-efficacy of future data-collection based on the long-term yield differential of the methods that are available and those that need additional data.

6 Designing new methods

DLMtool was designed to be extensible in order to promote the development of new MPs. In this section we design a series of new MPs that include spatial controls and input controls in the form of age-restrictions. The central requirement of any MP is that it can be applied to a DLM_data object using the function sapply (sfSapply() in parallel processing).

DLM_data objects have a single position x for each data entry, e.g. one value for natural mortality rate, a single vector of historical catches, etc. In the MSE analysis this is extended to nsim positions. It follows that any MP written so that sapply(x, MP, DLM_data) works will function correctly. For example, we can get 5 stochastic samples of the TAC for the demographic FMSY MP paired with catch-curve analysis (Fdem_CC), applied to a real data-limited data object for red snapper, using:

sapply(1,Fdem_CC,Red_snapper,reps=5)
#>          [,1]
#> [1,] 12.94103
#> [2,] 22.61339
#> [3,] 16.68597
#> [4,] 13.58120
#> [5,] 23.48501

The MSE just populates a DLM_data object with many simulations and uses sfSapply() (the snowfall cluster-computing equivalent) to calculate a management recommendation for each simulation. By making methods compatible with this standard, the very same equations are used both in the MSE and in generating real management advice.

The following new MPs illustrate this.

6.1 Average historical catch MP

The average historical catch has been suggested as a starting point for setting TACs in the most data-limited situations (following Restrepo et al. 1998). Here we design such an MP:

AvC <-function(x, DLM_data, reps)rlnorm(reps, log(mean(DLM_data@Cat[x,], na.rm=T)), 0.1) 

Note that all MPs have to be stochastic in this framework, which is why we sample from a log-normal distribution with a CV of roughly 10 per cent.

Before the MP can be ‘seen’ by the rest of the DLMtool package we have to do three more things. The MP must be assigned a class based on the outputs it provides; since this is an output control (TAC) MP, we assign it class DLM_output. The MP must also be assigned to the DLMtool namespace:

class(AvC) <-"DLM_output"
environment(AvC) <-asNamespace('DLMtool')

and - if we are using parallel computing - exported to the cluster:

sfExport("AvC")

6.2 Third-highest catch

In some data-limited settings the third-highest historical catch has been suggested as a possible catch limit. Here we use a similar approach to the average catch MP above (AvC) and take draws from a log-normal distribution with a CV of 10 per cent:

THC<-function(x,DLM_data, reps){
  rlnorm(reps,log(DLM_data@Cat[x,order(DLM_data@Cat[x,],decreasing=T)[3]]),0.1)
}
class(THC)<-"DLM_output"
environment(THC) <- asNamespace('DLMtool')

and again export to the cluster (if we are using parallel computing):

sfExport("THC")

6.3 Length-at-selection set equal to length-at-maturity

To simulate input controls that aim to alter the length-vulnerability to fishing, it is possible to design an MP of class DLM_input. These MPs simply set the length at 5 per cent selection and the smallest length at full selection. In this example we set selectivity equal to the maturity curve:

matlenlim <- function (x, DLM_data, ...) {
    dependencies = "DLM_data@LFC, DLM_data@LFS"
    Allocate <- 1
    Effort <- 1
    Spatial <- c(1, 1)
    newLFC <- DLM_data@L50[x] * 0.95
    newLFS <- DLM_data@L50[x]
    Vuln <- c(newLFC, newLFS)
    c(Allocate, Effort, Spatial, Vuln)
}
class(matlenlim) <- "DLM_input"
environment(matlenlim) <- asNamespace("DLMtool")

and export to cluster:

sfExport("matlenlim")

Note that for compatibility, these approaches still require an ‘x’ argument even if they don’t make use of it (i.e., they are the same regardless of the data or simulated data).
Also note that the arguments for the input methods must include either reps or ..., even if these are not used.

6.4 Reducing fishing rate in area 1 by 50 per cent

Spatial controls operate similarly to the age/size-based controls: a vector of length 2 (the spatial simulator is a 2-box model) indicates the fraction of the current fishing rate applied in each area. In this example we reduce fishing in area 1 by 50 per cent and assign the MP the class ‘DLM_input’.

area1_50<-function(x,DLM_data, ...){ 
  Allocate<-0 # Fraction of effort reallocated to open area
  Effort<-1  # Fraction of effort in last historical year
  Spatial<-c(0.5,1) # Fraction of effort found in each area
  Vuln<-rep(NA,2) # Length vulnerability is not specified   
  c(Allocate, Effort, Spatial, Vuln) # Input controls stitched together
}
class(area1_50)<-"DLM_input"
environment(area1_50) <- asNamespace('DLMtool')
sfExport("area1_50")

6.5 Applying the new MPs

Our MPs are now compatible with all of the DLMtool functionality. Let’s run a quick MSE and see how they fare:

new_MPs <- c("AvC","THC","matlenlim","area1_50")
OM <- new('OM',Porgy, Generic_IncE, Imprecise_Unbiased)
PorgMSE <- runMSE(OM,new_MPs,maxF=1,nsim=20,reps=1,proyears=20,interval=5) 
Tplot(PorgMSE)  

What if starting depletion were different, e.g. likely to be under BMSY?

OM@D
OM@D <- c(0.05,0.3)
PorgMSE2 <- runMSE(OM, new_MPs, maxF=1, nsim=20, reps=1, proyears=20, interval=5)
Tplot(PorgMSE2)  

Conveniently putting aside the likelihood of implementing a perfect knife-edge vulnerability at length-at-maturity, it appears that we have a winner in matlenlim even under different starting depletion levels. Third highest catch on the other hand appears risky to say the least. You could try some other starting depletion levels to see under what circumstances the trade-off space changes dramatically.

7 Managing real data

DLMtool has a series of functions to make importing data and applying data-limited MPs relatively straightforward. There are two approaches: (1) fill out a .csv data file in Excel or a text editor and use a DLMtool function to create a properly formatted DLM_data object (class DLM_data), or (2) create a blank DLM_data object and populate it directly in R.

7.1 Importing data

Probably the easiest way to get your data into DLMtool is to populate a .csv data file. These files have a line for each slot of the DLM_data object, e.g.:

slotNames('DLM_data')
#>  [1] "Name"       "Year"       "Cat"        "Ind"        "Rec"       
#>  [6] "t"          "AvC"        "Dt"         "Mort"       "FMSY_M"    
#> [11] "BMSY_B0"    "Cref"       "Bref"       "Iref"       "L50"       
#> [16] "L95"        "LFC"        "LFS"        "CAA"        "Dep"       
#> [21] "Abun"       "vbK"        "vbLinf"     "vbt0"       "wla"       
#> [26] "wlb"        "steep"      "CV_Cat"     "CV_Dt"      "CV_AvC"    
#> [31] "CV_Ind"     "CV_Mort"    "CV_FMSY_M"  "CV_BMSY_B0" "CV_Cref"   
#> [36] "CV_Bref"    "CV_Iref"    "CV_Rec"     "CV_Dep"     "CV_Abun"   
#> [41] "CV_vbK"     "CV_vbLinf"  "CV_vbt0"    "CV_L50"     "CV_LFC"    
#> [46] "CV_LFS"     "CV_wla"     "CV_wlb"     "CV_steep"   "sigmaL"    
#> [51] "MaxAge"     "Units"      "Ref"        "Ref_type"   "Log"       
#> [56] "params"     "PosMPs"     "MPs"        "OM"         "Obs"       
#> [61] "TAC"        "TACbias"    "Sense"      "CAL_bins"   "CAL"       
#> [66] "MPrec"      "MPeff"      "ML"         "Lbar"       "Lc"        
#> [71] "LHYear"     "Misc"

You do not have to enter data for every line of the data file; if data are not available, simply put an ‘NA’ next to the relevant field. A number of example .csv files can be found in the directory where the DLMtool package was installed:

DLMDataDir()
#> [1] "C:/Users/Adrian/AppData/Local/Temp/Rtmp8K1lza/Rinst580344460bf/DLMtool/"

To read data from a .csv file you need only specify its location, e.g. new('DLM_data', "I:/Mackerel.csv").
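
For example, you could list the example data files and then read one into a DLM_data object (the path C:/Data/MyStock.csv below is hypothetical; substitute the location of your own file):

list.files(DLMDataDir(), pattern="\\.csv$")        # example data files (may sit in a subdirectory)
MyStock <- new('DLM_data', "C:/Data/MyStock.csv")  # hypothetical path to your own data file
summary(MyStock)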

7.2 Populating a DLM_data object in R

Alternatively you can create a blank DLM_data object and fill the slots directly in R, e.g.:

Madeup<-new('DLM_data')                             #  Create a blank DLM object
#> [1] "Couldn't find specified csv file, blank DLM object created"
Madeup@Name<-'Test'                                 #  Name it
Madeup@Cat<-matrix(20:11*rlnorm(10,0,0.2),nrow=1)   #  Generate fake catch data
Madeup@Units<-"Million metric tonnes"               #  State units of catch
Madeup@AvC<-mean(Madeup@Cat)                        #  Average catches for time t (DCAC)
Madeup@t<-ncol(Madeup@Cat)                          #  No. yrs for Av. catch (DCAC)
Madeup@Dt<-0.5                                      #  Depletion over time t (DCAC)
Madeup@Dep<-0.5                                     #  Depletion relative to unfished 
Madeup@vbK<-0.2                                     #  VB maximum growth rate
Madeup@vbt0<-(-0.5)                                 #  VB theoretical age at zero length
Madeup@vbLinf<-200                                  #  VB maximum length
Madeup@Mort<-0.1                                    #  Natural mortality rate
Madeup@Abun<-200                                    #  Current abundance
Madeup@FMSY_M<-0.75                                 #  Ratio of FMSY/M
Madeup@L50<-100                                     #  Length at 50% maturity
Madeup@L95<-120                                     #  Length at 95% maturity
Madeup@BMSY_B0<-0.35                                #  BMSY relative to unfished
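
With these slots populated, Madeup behaves like any other DLM_data object; for example, the diagnostic functions described in the next section can be applied to it directly:

Can(Madeup)     # which MPs can be applied with these data?
Needed(Madeup)  # what additional data would unlock further MPs?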

7.3 Working with DLM_data objects

A generic summary function is available to visualize the data in a DLM_data object:

summary(Atlantic_mackerel)

You can see what MPs can and can’t be applied given your data and also what data are needed to get MPs working:

Can(Atlantic_mackerel)
#>  [1] "AvC"        "BK"         "CC1"        "CC4"        "DAAC"      
#>  [6] "DBSRA"      "DBSRA4010"  "DBSRA_40"   "DCAC"       "DCAC4010"  
#> [11] "DCAC_40"    "DD"         "DD4010"     "DepF"       "DynF"      
#> [16] "Fadapt"     "Fdem"       "Fratio"     "Fratio4010" "GB_slope"  
#> [21] "Gcontrol"   "HDAAC"      "Itarget1"   "Itarget4"   "MCD"       
#> [26] "MCD4010"    "NFref"      "Rcontrol"   "Rcontrol2"  "SBT1"      
#> [31] "SPMSY"      "SPSRA"      "SPmod"      "SPslope"    "YPR"       
#> [36] "THC"        "DDe"        "DDe75"      "DTe40"      "DTe50"     
#> [41] "ItargetE1"  "ItargetE4"  "MRnoreal"   "MRreal"     "curE"      
#> [46] "curE75"     "matlenlim"  "matlenlim2" "slotlim"    "area1_50"
Cant(Atlantic_mackerel)
#>       [,1]          [,2]                    
#>  [1,] "BK_CC"       "Insufficient data"     
#>  [2,] "BK_ML"       "Insufficient data"     
#>  [3,] "CompSRA"     "Insufficient data"     
#>  [4,] "CompSRA4010" "Insufficient data"     
#>  [5,] "DBSRA_ML"    "Insufficient data"     
#>  [6,] "DCAC_ML"     "Insufficient data"     
#>  [7,] "FMSYref"     "Insufficient data"     
#>  [8,] "FMSYref50"   "Insufficient data"     
#>  [9,] "FMSYref75"   "Insufficient data"     
#> [10,] "Fdem_CC"     "Insufficient data"     
#> [11,] "Fdem_ML"     "Insufficient data"     
#> [12,] "Fratio_CC"   "Insufficient data"     
#> [13,] "Fratio_ML"   "Insufficient data"     
#> [14,] "GB_CC"       "Produced all NA scores"
#> [15,] "GB_target"   "Produced all NA scores"
#> [16,] "IT10"        "Insufficient data"     
#> [17,] "IT5"         "Insufficient data"     
#> [18,] "ITM"         "Insufficient data"     
#> [19,] "Islope1"     "Produced all NA scores"
#> [20,] "Islope4"     "Produced all NA scores"
#> [21,] "LBSPR_ItTAC" "Produced all NA scores"
#> [22,] "LstepCC1"    "Insufficient data"     
#> [23,] "LstepCC4"    "Insufficient data"     
#> [24,] "Ltarget1"    "Insufficient data"     
#> [25,] "Ltarget4"    "Insufficient data"     
#> [26,] "SBT2"        "Produced all NA scores"
#> [27,] "SPSRA_ML"    "Insufficient data"     
#> [28,] "YPR_CC"      "Insufficient data"     
#> [29,] "YPR_ML"      "Insufficient data"     
#> [30,] "DDes"        "Insufficient data"     
#> [31,] "ITe10"       "Insufficient data"     
#> [32,] "ITe5"        "Insufficient data"     
#> [33,] "LBSPR_ItEff" "Insufficient data"     
#> [34,] "LBSPR_ItSel" "Insufficient data"     
#> [35,] "LstepCE1"    "Insufficient data"     
#> [36,] "LstepCE2"    "Insufficient data"     
#> [37,] "LtargetE1"   "Insufficient data"     
#> [38,] "LtargetE4"   "Insufficient data"
Needed(Atlantic_mackerel)
#>  [1] "BK_CC: CAA"              "BK_ML: CAL"             
#>  [3] "CompSRA: CAA"            "CompSRA4010: CAA"       
#>  [5] "DBSRA_ML: CAL"           "DCAC_ML: CAL"           
#>  [7] "FMSYref: OM"             "FMSYref50: OM"          
#>  [9] "FMSYref75: OM"           "Fdem_CC: CAA"           
#> [11] "Fdem_ML: CAL"            "Fratio_CC: CAA"         
#> [13] "Fratio_ML: CAL"          "GB_CC: Cref"            
#> [15] "GB_target: Cref, Iref"   "IT10: Iref, MPrec"      
#> [17] "IT5: Iref, MPrec"        "ITM: Iref, MPrec"       
#> [19] "Islope1: MPrec"          "Islope4: MPrec"         
#> [21] "LBSPR_ItTAC: CAL, MPrec" "LstepCC1: MPrec, ML"    
#> [23] "LstepCC4: MPrec, ML"     "Ltarget1: ML"           
#> [25] "Ltarget4: ML"            "SBT2: Rec, Cref"        
#> [27] "SPSRA_ML: CAL"           "YPR_CC: CAA"            
#> [29] "YPR_ML: CAL"             "DDes: MPeff"            
#> [31] "ITe10: Iref, MPeff"      "ITe5: Iref, MPeff"      
#> [33] "LBSPR_ItEff: CAL, MPeff" "LBSPR_ItSel: CAL"       
#> [35] "LstepCE1: MPeff, ML"     "LstepCE2: MPeff, ML"    
#> [37] "LtargetE1: MPeff, ML"    "LtargetE4: MPeff, ML"

Spatial MPs and length-vulnerability MPs (class DLM_input) can be tested by MSE, but the MP itself is the management recommendation. For DLM_output MPs, however, a TAC can be calculated:

Atlantic_mackerel <- TAC(Atlantic_mackerel,reps=48)

and the resulting TAC distributions plotted with the generic plot function:

plot(Atlantic_mackerel)
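
The TAC samples themselves are stored in the TAC slot of the object. A minimal sketch for summarising them, assuming the first dimension of that slot indexes the MPs (in the order given by the MPs slot):

TACsumm <- apply(Atlantic_mackerel@TAC, 1, quantile, probs=c(0.05, 0.5, 0.95), na.rm=TRUE)
colnames(TACsumm) <- Atlantic_mackerel@MPs  # assumes one column per MP
TACsumm                                     # median and 90% interval of the TAC samples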

8 Limitations

8.1 Idealised observation models for catch composition data

Currently, DLMtool simulates catch-composition data from the true simulated catch composition via a multinomial distribution and an assumed effective sample size. This observation model may be unrealistically well-behaved and favour those approaches that use these data. We (and by that we mean Adrian) are adding a growth-type-group model to improve the realism of simulated length-composition data.

8.2 Harvest control rules must be integrated into data-limited MPs

In this version of DLMtool, harvest control rules (e.g. the 40-10 rule) must be written into a data-limited MP. There is currently no ability to run a factorial comparison of, say, 4 harvest control rules against 3 MPs; the user must describe all 12 combinations. The reason is that this would require further subclasses. For example, the 40-10 rule may be appropriate for the output of DBSRA, but it would not be appropriate for some of the simpler management procedures, such as DynF, that already throttle TAC recommendations according to stock depletion.
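
As an illustration of what ‘written into an MP’ means, here is a simplified sketch of a 40-10 rule wrapped around DBSRA (the packaged DBSRA4010 takes a broadly similar approach; the function name, the use of the Dep slot as the depletion estimate, and the exact form of the ramp below are assumptions for illustration only):

DBSRA_4010_demo <- function(x, DLM_data, reps=100){
  TAC <- DBSRA(x, DLM_data, reps)  # base catch recommendation from DBSRA
  dep <- DLM_data@Dep[x]           # point estimate of depletion (B/B0)
  if (!is.na(dep) && dep < 0.4) {
    # 40-10 rule: full TAC above 40% of unfished biomass, linear ramp to zero at 10%
    TAC <- TAC * max((dep - 0.1) / 0.3, 0)
  }
  TAC
}
class(DBSRA_4010_demo) <- "DLM_output"
environment(DBSRA_4010_demo) <- asNamespace("DLMtool")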

8.3 Natural mortality rate at age

The current simulation assumes constant M with age. Age-specific M will be added soon.

8.4 Ontogenetic habitat shifts

Since the operating model simulates two areas, it is possible to prescribe a log-linear model that moves fish from one area to the other as they grow older. This could be used, for example, to simulate the ontogenetic shift of groupers from nearshore waters to offshore reefs. This feature is currently in development.

8.5 Implementation error

In this edition of DLMtool there is no implementation error. The only imperfection between a management recommendation and its implementation in the simulation comes from the maxF argument, which limits the maximum fishing mortality rate on any given age class in the operating model. The default is 0.8, which is high for all but the shortest-lived fish species.

9 References

Carruthers, T.R., Punt, A.E., Walters, C.J., MacCall, A., McAllister, M.K., Dick, E.J., Cope, J. 2014. Evaluating methods for setting catch-limits in data-limited fisheries. Fisheries Research. 153, 48-68.

Carruthers, T.R., Kell, L., Butterworth, D., Maunder, M., Geromont, H., Walters, C., McAllister, M., Hillary, R., Kitakado, T., Davies, C. 2015. Performance review of simple management procedures. ICES J. Mar. Sci., in press.

Costello, C., Ovando, D., Hilborn, R., Gaines, S.D., Deschenes, O., Lester, S.E., 2012. Status and solutions for the world’s unassessed fisheries. Science. 338, 517-520.

Deriso, R.B., 1980. Harvesting Strategies and Parameter Estimation for an Age-Structured Model. Can. J. Fish. Aquat. Sci. 37, 268-282.

Dick, E.J., MacCall, A.D., 2011. Depletion-Based Stock Reduction Analysis: A catch-based method for determining sustainable yields for data-poor fish stocks. Fish. Res. 110, 331-341.

Geromont, H.F. and Butterworth, D.S. 2014. Complex assessment or simple management procedures for efficient fisheries management: a comparative study. ICES J. Mar. Sci.

MacCall, A.D., 2009. Depletion-corrected average catch: a simple formula for estimating sustainable yields in data-poor situations. ICES J. Mar. Sci. 66, 2267-2271.

Newman, D., Berkson, J., Suatoni, L. 2014. Current methods for setting catch limits for data-limited fish stocks in the United States. Fish. Res. 164, 86-93.

Restrepo, V.R., Thompson, G.G., Mace, P.M., Gabriel, W.L., Low, L.L., MacCall, A.D., Methot, R.D., Powers, J.E., Taylor, B.L., Wade, P.R., Witzig, J.F.,1998. Technical Guidance On the Use of Precautionary Approaches to Implementing National Standard 1 of the Magnuson-Stevens Fishery Conservation and Management Act. NOAA Technical Memorandum NMFS-F/SPO-31. 54 pp.