U.S. Department of the Interior
Bureau of Land Management
Assessment, Inventory, and Monitoring (AIM) Desk Guide

PREPARING OFFICE
U.S. Department of the Interior
Bureau of Land Management
National Operations Center
Denver Federal Center, Building 50
Denver, Colorado 80225-0047

Contents
1.0 INTRODUCTION
2.4 DESK GUIDE OVERVIEW
1.1 AUDIENCE
2.3 REMOTE SENSING OVERVIEW
3.0 PLANNING AND PROJECT INITIATION
3.1 OVERVIEW
3.2 TOOLS: PROJECT LEADS TRAINING AND MONITORING DESIGN WORKSHEETS
3.3 THE FIVE BASIC STEPS TO PROJECT PLANNING AND INITIATION
3.3.1 Step 1: Coordinate with AIM State Lead and/or Monitoring Coordinator to discuss monitoring priorities, budget, and crew hiring options
3.3.1.1 Using Remote Sensing to Inform Monitoring
3.3.2 Step 2: Identify Roles and Responsibilities
3.3.3 Step 3: Form an Interdisciplinary (ID) Team
3.3.4 Step 4: Develop a Monitoring Design Worksheet
3.3.5 Step 5: Revisit and Revise Monitoring Design Worksheet Annually
3.3.5.1 Remote Sensing Helps Inform MDW Revisions
4.0 DESIGN
4.1 OVERVIEW
4.2 TOOLS
4.3 THE SEVEN STEPS TO COMPLETING A MONITORING DESIGN WORKSHEET
4.3.1 Step 1: Develop management objectives; select additional ecosystem attributes and indicators to monitor
4.3.1.1 Step 1a: Develop management objectives or goals related to resource condition and resource trend
4.3.1.2 Step 1b: Select additional ecosystem attributes and indicators to monitor
4.3.2 Step 2: Set study area and reporting units; develop monitoring objectives
4.3.2.1 Step 2a: Set the study area, reporting units, define the target population, document the geospatial layers used to describe these areas, and select the existing sample designs to be used for revisits
4.3.2.2 Step 2b: Develop monitoring objectives related to resource condition and resource trend
4.3.3.1 Remote Sensing Informs Stratification
4.3.4 Step 4: Select and document supplemental monitoring methods; estimate sample sizes; set sampling frequency; develop implementation rules
4.3.4.1 Step 4a: Select and document supplemental monitoring methods (optional/if required)
4.3.4.3 Step 4c: Define revisit parameters (Use the Revisit Frequency Table to document decisions made in this section)
4.3.4.4 Step 4d: Develop implementation rules
4.3.6 Step 6: Apply stratification and select monitoring locations
4.3.7 Step 7: Data management plans
5.0 DATA COLLECTION
5.1 OVERVIEW
5.2 TOOLS
5.3 THE FOUR STEPS TO DATA COLLECTION
5.3.1 Step 1: Preparation
5.3.1.1 Personnel and Equipment Prep
5.3.1.2 Point Evaluation and Rejection
5.3.1.4.1 Monitoring Design
5.3.1.4.2 Trip Planning
5.3.2 Step 2: Field Methods Training
5.3.3.1 Field Sampling
5.3.3.2 Electronic Data Capture and Data Management
5.3.4 Step 4: Data QC and Ingestion Prior to Data Use
5.4 USING REMOTE SENSING TO EVALUATE CRITICAL CONCEPTS OR AN ADDITIONAL LINE OF EVIDENCE
6.0 APPLYING AIM DATA: ANALYSIS AND REPORTING
6.1 OVERVIEW
6.2 TOOLS
6.3 THE NINE STEPS TO THE STANDARD AIM DATA USE WORKFLOW
6.3.1 Preparing for an Analysis
Step 1: Identify Management Goals and Land Health Standards to be Evaluated
Step 2: Obtain Available Data Within the Area of Interest
Step 3: Select Indicators for Evaluating Goals
Step 4: Set Benchmark Values or Define Condition Categories
Step 5: Identify Relevant Plots/Reaches and Assign Benchmark Groups
6.3.2 Conducting an Analysis
Step 6: Apply Benchmark Values and Document Which Plots Achieve Benchmarks
Step 7: Determine Appropriate Analysis
Single Point Analyses
Un-Weighted vs. Weighted Analysis
Un-Weighted Analysis
Weighted Analysis
Other Types of AIM Analysis
Analyzing Trend
Causal Analyses
Remote Sensing Analyses
6.3.3 Interpreting Results
Step 8: Communicating Results
Data Visualization
Boxplots and histograms
Points and error bars
Stacked bar plots
Maps and photos
Dashboards and StoryMaps
Step 9: Decide whether management goals have been met
Using Multiple Lines of Evidence
7.0 GLOSSARY
8.0 LITERATURE CITED
9.0 TABLES
TABLE 1: AIM RELATED POLICY SUMMARY – HOW AIM SUPPORTS THE BLM MISSION
10.0 FIGURES
11.0 PHOTOS
12.0 APPENDICES
APPENDIX A: ROLES AND RESPONSIBILITIES
APPENDIX B: SETTING BENCHMARKS
Policy
Reference Conditions
Peer Reviewed Articles
Best Professional Judgment
APPENDIX C: SAMPLE SUFFICIENCY TABLES
AIM SAMPLE SUFFICIENCY TABLES
EXAMPLE

List of Acronyms/Abbreviations
AIM – Assessment, Inventory, and Monitoring
BLM – Bureau of Land Management
DIMA – Database for Inventory, Monitoring, and Assessment
IDT – Interdisciplinary Team
MDW – Monitoring Design Worksheet
NOC – National Operations Center
NAMC – National Aquatic Monitoring Center

1.0 Introduction
In 2004, the Office of Management and Budget (OMB) reviewed the BLM's monitoring budget and found that BLM was unable to report on the national condition of public lands with available data. OMB recommended that BLM examine its monitoring activities and develop an approach that would enable reporting beyond the individual project level. Building on a detailed monitoring program review, BLM developed the AIM Strategy to supply monitoring data that could be used at multiple scales and across multiple programs.
The BLM's AIM program (https://www.blm.gov/aim/strategy) enables the Bureau to "prepare and maintain on a continuing basis an inventory of all public lands and their resource and other values," as required by the Federal Land Policy and Management Act (FLPMA, Sec. 201a). The AIM Strategy provides a standardized approach for measuring natural resource condition and trend on BLM public lands by providing quantitative data and tools to guide and justify policy actions, land uses, and adaptive management decisions. AIM data and analysis products address the health of upland rangelands (terrestrial), rivers and streams (lotic), and riparian and wetland areas. For BLM, AIM data provide extensive opportunities to understand and tell the story of our landscapes and land management efforts while meeting monitoring requirements.
Protecting and improving land health requires comprehensive landscape management strategies. Land managers have embraced a landscape-scale philosophy and have developed new methods to inform decision making, such as the AIM program. The AIM Strategy seeks to reach across programs, jurisdictions, stakeholders, and agencies to provide standardized information to inform management decisions (Toevs et al. 2011).
The BLM is responsible for the management of about 245 million acres of public land for a variety of uses, including livestock grazing, energy development and reclamation, wildlife habitat, timber harvesting, and outdoor recreation, while conserving natural, cultural, and historical resources. The AIM dataset is one of the largest available datasets to inform resource management decisions on BLM-managed public lands. This desk guide describes how to implement an AIM project, from project initiation through data analysis and use.
Through the development of the AIM Strategy, the BLM has gained a standard, quantitative approach to assessing the health of terrestrial, lotic, and riparian and wetland ecosystems and informing management of public lands. AIM data represent a reliable, high-quality data source that meets CEQ and Data Quality Act guidelines. AIM data may be used alone or in conjunction with other types of data in a multiple-lines-of-evidence approach to inform decision-making. For example, AIM data can provide a snapshot of current conditions and a means of tracking resource changes over time across a Land Use Planning area or NEPA project area, while other data may provide more detailed analysis of specific areas and land uses (e.g., land health evaluations and determinations, land treatment and restoration effectiveness reports). Assessments such as Interpreting Indicators of Rangeland Health (TR 1734-6), Proper Functioning Condition (TR 1737-15 and TR 1737-16), or the Greater Sage-Grouse Habitat Assessment Framework (TR 6710-1) should be used to augment the status and trend information and should incorporate AIM indicators and methods to complete the assessments, when possible. In addition, other high-quality information that describes terrestrial, lotic, and riparian and wetland condition, including satellite-derived maps, can be used to inform the assessment.
2.1 AIM Strategy – The Six Principles
The AIM Strategy was developed to provide decision makers timely and quantitative data and information at multiple spatial scales to assist in adaptively managing public lands at the agency-wide level, as an alternative to developing monitoring programs for each specific use. The goal of the AIM Strategy is "to provide the BLM and its partners with the information needed to understand...resource location and abundance, condition, and trend, and to provide a basis for effective adaptive management" (Kachergis et al. 2022; Toevs et al. 2011).
Resource information is needed at multiple scales to manage public lands effectively. This includes gathering information about resource extent, condition and trend, stressors, and the location and nature of authorized uses, disturbances, and projects. To accomplish acquisition and assessment of this information, the AIM Strategy integrates six fundamental principles:
1. Structured implementation to guide monitoring program development, implementation, and management decisions.
2. A standard set of quantitative indicators and methods to allow data comparisons throughout the BLM and in collaboration with BLM partners.
3. Appropriate sample design to minimize bias and maximize what can be learned from collected data.
4. Integration with remote sensing to optimize sampling and calibrate continuous map products.
5. Standardized electronic data capture, centralized data management, and national stewardship to ensure data quality, accessibility, and use.
6. Standard workflows and analysis frameworks for using data.
AIM Data and Use
As of 2023, standardized AIM data are available at more than 50,000 terrestrial, 4,500 lotic, and 400 riparian and wetland monitoring locations from Alaska to New Mexico. These data can be visualized on the AIM data portals (links for public and internal data access). Example applications of AIM data include:
1. Evaluating the attainment of BLM land health standards (43 CFR 4180.1).
2. Informing grazing permit decisions (BLM Instruction Memorandum 2009-007).
3. Tracking the spread of invasive species and prioritizing treatment areas.
4. Assessing reclamation and restoration treatment effectiveness, including after fires.
5. Assessing habitat conditions for species of management concern (e.g., Greater Sage-grouse, native fishes, and mule deer habitat).
6. Determining the effectiveness of, and adaptively managing, land use plans (Land Use Planning Handbook H-1601-1; BLM Instruction Memorandum 2016-139).
7. Assisting in the completion of national, regional, and state-based assessments to prioritize restoration, conservation, and permitted uses.
Additional examples of AIM data use can be found on the AIM website (https://www.blm.gov/aim/strategy) and on the BLM-AIM SharePoint (for DOI users).
2.4 Desk Guide Overview
The BLM National AIM Team provides and maintains this desk guide for use by BLM AIM State Leads, State Monitoring Coordinators, Project Leads, and other AIM practitioners involved in the process of data collection, data use, and management on public lands. This desk guide describes the process to implement the six principles of the AIM Strategy by providing guidance on how to initiate and plan a monitoring project, how to create a monitoring design, and how to perform data collection and management, and lastly by suggesting a standard workflow for data use.
1.1 Audience
This AIM Desk Guide can be used by a large audience of AIM practitioners at the local, state, and national levels. The target audience includes State Leads, Project Leads, Monitoring Coordinators, field office specialists, interdisciplinary teams, field and district managers, National AIM Team members, and headquarters personnel. A comprehensive list of roles and responsibilities associated with these practitioners can be found in Appendix A: Roles and Responsibilities.
General Concepts Introduction
There are some general concepts that users will encounter many times throughout the AIM process. These are briefly introduced and summarized here and will be discussed in more detail throughout the desk guide.
Management goals
Management goals describe the desired resource conditions that land management actions are intended to achieve; they provide the context for why monitoring information is needed and how it will be used.
Monitoring objectives
Monitoring objectives should be developed from management goals and will shape data collection efforts and the subsequent utility of the data in analysis.
Benchmarks
Benchmarks help provide context to the data, and benchmark development is an ongoing process that happens concurrently with data collection. Benchmark development should be accomplished for any project area. Benchmarks are values (or ranges of values) for a given metric or indicator that establish desired conditions and are meaningful for management. Benchmark development is an iterative process; at the outset, benchmarks are developed from any existing information such as primary literature or existing monitoring data, and values are revised as more data are collected to inform our understanding of condition.
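A simple illustration of the benchmark concept is sketched below in Python. The indicator names and benchmark ranges are hypothetical placeholders, not official AIM or Land Health Standard values; the sketch only shows how a benchmark can be expressed as a range of acceptable values and compared against observed indicator values.

# Illustrative only: hypothetical indicators and benchmark ranges,
# not official AIM or Land Health Standard values.
BENCHMARKS = {
    # indicator: (minimum acceptable value, maximum acceptable value)
    "bare_ground_pct": (0, 30),        # no more than 30% bare ground
    "sagebrush_cover_pct": (15, 100),  # at least 15% sagebrush cover
}

def meets_benchmark(indicator, observed_value):
    """Return True if the observed value falls within the benchmark range."""
    low, high = BENCHMARKS[indicator]
    return low <= observed_value <= high

# Hypothetical plot-level observations.
plot_observations = {"bare_ground_pct": 22.5, "sagebrush_cover_pct": 11.0}

for indicator, value in plot_observations.items():
    status = "meets" if meets_benchmark(indicator, value) else "does not meet"
    print(f"{indicator} = {value}: {status} benchmark range {BENCHMARKS[indicator]}")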
Benchmarks are important for any monitoring effort because they make analyses useful for informed decision making. The AIM Strategy uses benchmarks at various stages of the process, notably during the Design and the Analysis and Reporting steps. Benchmarks are used to determine whether observed indicator values at assessed points (i.e., monitoring reaches or plots) fall within the range of desired conditions. A lack of benchmarks can complicate the interpretation of monitoring data. For example, achieving a plant density benchmark value following a seeding treatment may indicate that the project was successful, while failure to meet the benchmark may trigger reevaluation of the seeding methods. Conversely, observed electrical conductivity (EC) values characterize the amount of cations and anions dissolved in stream water at a monitoring location, but without appropriate benchmarks, these observed values cannot be used to assess condition or the attainment of management objectives.
Integration with remote sensing
Remote sensing is used in multiple steps of the AIM workflow and has evolved to include more than just sampling optimization and calibrated mapping products. Remotely sensed data offer a broad-scale complement to field surveys to develop a more holistic picture of resource extent, conditions, and trends. Passive remote sensing sensors, including hyper- and multispectral imagery (e.g., Landsat, Sentinel-2, EO-1 Hyperion), provide information on surface reflectance at various points along the electromagnetic spectrum. These data can be used to differentiate surface cover, including various vegetation functional groups, and to characterize vegetation health. For example, decreased reflectance in the near-infrared range (~850 nm) and increased reflectance in the short-wave infrared range (~1500-2000 nm) are indicative of stressed vegetation. Active remote sensing, including lidar and radar, can provide information on the structure and physical characteristics of the surface. For example, the number, range, and distribution of non-ground points in a lidar dataset can quantify the structure and density of surface vegetation. Importantly, the data archive of these sensors continues to grow (e.g., a Landsat satellite has been in continuous orbit for more than 50 years), allowing for longer-term change detection and trend analysis. At the same time, a growing number of small satellite constellations (e.g., Planet Labs) provide greater spatial and temporal resolution, some with daily revisits, offering a novel perspective on phenology and surface-change monitoring.
While remotely sensed data cannot yet provide the level of detail achievable in a field survey, we can model the relationship between remotely sensed data (e.g., reflectance) and field surveys to predict the occurrence of species more broadly. This is the guiding principle behind the Landscape Cover Analysis and Reporting Tools (LandCART) platform, where a statistical relationship is developed between AIM field data and Landsat reflectance values and then used to predict fractional vegetation cover range-wide or back in time.
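The spectral behavior described above can be summarized with standard vegetation indices. The short Python sketch below computes the Normalized Difference Vegetation Index (NDVI) and Normalized Burn Ratio (NBR) from red, near-infrared, and short-wave infrared reflectance; the reflectance values are hypothetical and are included only to show how declines in these indices can flag vegetation stress or fire effects.

# Illustrative spectral index calculations; reflectance values are hypothetical.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: higher values indicate greener vegetation."""
    return (nir - red) / (nir + red)

def nbr(nir, swir2):
    """Normalized Burn Ratio: large decreases between dates indicate burning or defoliation."""
    return (nir - swir2) / (nir + swir2)

# Hypothetical surface reflectance (0-1) for one pixel before and after a disturbance.
before = {"red": 0.08, "nir": 0.45, "swir2": 0.12}
after = {"red": 0.15, "nir": 0.25, "swir2": 0.30}

print(f"NDVI before: {ndvi(before['nir'], before['red']):.2f}, after: {ndvi(after['nir'], after['red']):.2f}")
print(f"NBR before:  {nbr(before['nir'], before['swir2']):.2f}, after:  {nbr(after['nir'], after['swir2']):.2f}")
# A large drop in NDVI or NBR between dates can flag vegetation stress,
# defoliation, or fire effects that may warrant field monitoring.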
When studying lotic systems, remotely sensed data can help identify watershed-level disturbances (e.g., the extent and severity of wildfires) that could drive hydrological changes and turbidity. Remote sensing can also help in mapping the extent and variability of riparian and wetland systems. The spectral signature of water differs markedly from that of vegetation and soil, and Synthetic Aperture Radar (SAR) data are adept at distinguishing open water, submerged vegetation, and land even in cloudy, unfavorable conditions that would prevent the collection of multispectral data. Taken together, remotely sensed data complement vector geospatial data and field surveys to form a more complete picture of ecosystem characteristics and change.
2.3 Remote Sensing Overview
3.0 Planning and Project Initiation
3.1 Overview
Planning and implementing an AIM project can be simple or complex depending on the needs and scale of the monitoring effort. This section discusses the five basic steps to plan and implement an AIM project and identifies the specific people who will be involved and their roles and responsibilities in the process. Note that many offices may have multiple ongoing AIM efforts across different resources (terrestrial, lotic, and/or riparian and wetland) and different scales (Land Use Planning, allotment, or treatment scale). We recommend learning about what monitoring has already occurred in your field office or district office to provide context for planning continued monitoring.
3.2 Tools: Project Leads Training and Monitoring Design Worksheets
3.3 The Five Basic Steps to Project Planning and Initiation
3.3.1 Step 1: Coordinate with AIM State Lead and/or Monitoring Coordinator to discuss monitoring priorities, budget, and crew hiring options.
The first step in planning and initiating AIM efforts is to identify monitoring priorities and avenues for funding AIM efforts in coordination with the appropriate Field, District, and State Offices. Review Table 1: AIM-Related Policy Summary – How AIM Supports the BLM Mission (Section 9.0 – Tables) for more information about how AIM is integrated with other BLM program work. Once monitoring priorities are identified, funding requests should be submitted to the programs that benefit from the monitoring work. Budget submission processes vary by office and program; talk to your AIM State Lead and/or State Monitoring Coordinator for support in making the appropriate budget requests. Given that AIM is a standard dataset that informs questions shared across multiple programs (e.g., Land Health Standards attainment), it is often possible to pool resources from multiple programs and thus gain efficiencies in completing monitoring work.
Data can be collected by dedicated AIM crews, Project Leads, specialists, or other field office staff. Typically, the majority of AIM data are collected by dedicated field crews. If choosing to hire crews, work with the AIM State Lead and/or Monitoring Coordinator and local Field or District Office management to explore hiring options and establish timelines for the project. Crew hiring can be accomplished through a variety of mechanisms, including contracting through the AIM IDIQ, BLM seasonal hiring, or assistance agreements. Hiring costs depend greatly on the hiring mechanism and local factors. Work with your AIM State Lead or Monitoring Coordinator to estimate costs while considering the type of data collectors that will be utilized. See section 5.3.1.1 for more detailed information about hiring data collectors. Refer to Figure 1, Yearly AIM Implementation Calendar, for general timelines of each phase of AIM implementation.
Figure 1. The Yearly AIM Implementation Calendar outlines the general time of year when each phase of AIM implementation should be completed.
3.3.1.1 Using Remote Sensing to Inform Monitoring
In addition to field data, consider available remote sensing data sources to inform monitoring planning and to provide needed indicators for land management. For example, many monitoring applications use remote sensing products to locate monitoring sites within areas of interest. Remote sensing products provide indicator estimates that may help with project planning, such as total plant foliar cover and bare ground across landscapes and through time.
Breakout box: Remote sensing products can be useful for identifying disturbance and changes in vegetation functional group composition. For example, fire or vegetation defoliation resulting from water stress or insect/disease outbreaks may be detected using remotely sensed vegetation metrics such as decreases in the Normalized Burn Ratio (NBR, a normalized difference of near-infrared and short-wave infrared reflectance), decreases in the Normalized Difference Vegetation Index (NDVI, a normalized difference of near-infrared and red reflectance), or decreases in VH backscatter in SAR imagery. Remotely sensed data can also be used to model the abundance of functional groups such as annual grasses through products like the LandCART platform. This could identify areas that would benefit from additional field monitoring, such as those increasingly dominated by invasive species.
3.3.2 Step 2: Identify Roles and Responsibilities
The second step in the process is identifying the roles and responsibilities of each member or group involved in an AIM project, which is essential to the success and longevity of any monitoring effort. Individuals and groups involved in an AIM project include: (1) National AIM Team members; (2) the AIM State Lead and/or Monitoring Coordinator; (3) the Field Office AIM Project Lead; (4) the Interdisciplinary Team (IDT); and (5) the data collection crew. Visit Roles and Responsibilities (Appendix A) for detailed descriptions of these roles.
3.3.3 Step 3: Form an Interdisciplinary (ID) Team
The AIM Strategy is intended to be used across programs and resources. Project Leads are encouraged to collaborate with other resource specialists in their office to begin planning workload and funding and identifying monitoring goals and objectives for Lotic, Terrestrial, and Riparian & Wetland AIM efforts. This process ensures engagement across the district or field office and that monitoring meets the needs of all stakeholders. The IDT is also an essential group for establishing the benchmarks used during the design, analysis, and reporting steps. It is recommended that the team collaborate on the process of setting benchmarks concurrently with any AIM effort. This collaboration will ensure that monitoring designs answer monitoring objectives, that design creation can proceed quickly, and that data can be analyzed efficiently.
3.3.4 Step 4: Develop a Monitoring Design Worksheet
The Monitoring Design Worksheet (MDW) is a template used to guide and document objectives to develop successful monitoring efforts. The MDW communicates design specifications to the National AIM Team for creating a monitoring design (if a random design is appropriate) and to other BLM personnel who want to learn more about local AIM monitoring. Completing the worksheet is essential when planning a design but will also inform subsequent data analyses.
Section 4.3 covers the steps to complete a Monitoring Design Worksheet: specifically, how to properly track management goals, monitoring objectives, specific data needs, the appropriate sample design to employ, and all design specification details.
3.3.5 Step 5: Revisit and Revise Monitoring Design Worksheet Annually
Once created, MDWs should be reviewed and revised annually. Monitoring in an adaptive management framework requires that data are analyzed regularly to inform management and that monitoring objectives and sample designs be reviewed and adjusted as needed. Reviewing which data were collected and whether the data are addressing monitoring objectives can help practitioners decide if adjustments to the monitoring approach are necessary. An annual review is also a good way to bring new IDT members up to speed on ongoing AIM efforts.
3.3.5.1 Remote Sensing Helps Inform MDW Revisions
The first step in the Monitoring Design Worksheet is to develop management objectives or goals related to resource conditions and trends. A major disturbance in a Land Use Planning design study area may trigger new management objectives for that area and may necessitate a short- or long-term modification to implementation of the original sample design, or even the creation of a project-level design. Remote sensing can be used to inform this planning process. For example, while vector fire perimeters can identify field sampling plots that may have burned, a burn severity image can provide more detail on the extent and severity of fire on the landscape, as well as whether monitoring plots were directly affected. Remotely sensed data can also identify other surface disturbances that are often not captured in vector geospatial datasets.
4.0 Design
4.1 Overview
When starting a new AIM sample design or reviewing an existing one, consult with an interdisciplinary team to identify the specific management questions and objectives of interest. Once these questions are identified, an appropriate monitoring plan can be developed and the necessary technical decisions made via an AIM sample design. The process of designing a monitoring and assessment effort can be broken down into a series of steps (Figure 3; Steps 1-7). The steps are listed in the order in which they are normally completed, but there is no single way to design a monitoring program; the steps should be viewed as an iterative process. As the Project Lead and the National AIM Team work through the steps in the design process, decisions made earlier in the process may require modification. As the IDT goes through the design process, complete the corresponding Monitoring Design Worksheet found in the appendix, which provides a step-by-step template for creating a BLM AIM sample design. A Monitoring Design Worksheet template can also be found on the AIM SharePoint site.
Figure 3: Monitoring of core methods program design, implementation, and integration with management
Each step in the sample design process (Figure 3) is tied to a step in the Monitoring Design Worksheet (MDW). Once an MDW is drafted, the Project Lead should coordinate with the appropriate State Lead and National AIM Team staff for that resource to review and update it as necessary.
Completion of the worksheet is an iterative process, and it can be revised and updated throughout the life cycle of an AIM project. To request further assistance, contact the appropriate resource personnel at the BLM National Operations Center.
4.2 Tools
Monitoring Design Worksheet Template
Example Monitoring Design Worksheet
4.3 The Seven Steps to Completing a Monitoring Design Worksheet
Step 1: Develop management objectives or goals; select ecosystem attributes and indicators to monitor
Step 1a: Develop management objectives or goals related to resource condition and (if necessary) resource trend
Step 1b: Select ecosystem attributes and indicators to monitor
Step 2: Set the study area and reporting units; develop monitoring objectives
Step 2a: Set the study area and reporting units, define the target population, and document the geospatial layers used to describe these areas. For revisit designs, select the existing sample designs to be used for revisits.
Step 2b: Develop monitoring objectives related to resource condition and (if necessary) resource trend
Step 2c: Refine the target population using monitoring objectives
Step 3: Select criteria for stratifying the study area (as appropriate)
Step 4: Select and document supplemental monitoring methods; estimate sample sizes; set sampling frequency; develop implementation rules
Step 4a: Review and document supplemental monitoring methods (if required)
Step 4b: Estimate sample sizes (completed by the National AIM Team)
Step 4c: Define revisit parameters (revisit designs only)
Step 4d: Develop implementation rules
Step 5: Collect and evaluate available data to determine sample size requirements
Step 6: Apply stratification and select statistically appropriate monitoring locations
Step 7: Develop quality assurance and quality control (QA and QC) procedures and data management plans
4.3.1 Step 1: Develop management objectives; select additional ecosystem attributes and indicators to monitor
4.3.1.1 Step 1a: Develop management objectives or goals related to resource condition and resource trend
One of the first and most important steps in the AIM process is identifying the management goals of a monitoring effort. Management goals should provide the context for why monitoring information is needed and how it will be used.
After gaining management approval, assemble an IDT to review existing documents that describe management history, planned management actions, previous data collection efforts, and relevant policy. Some examples of documents that should be included in this review are listed below:
• BLM Land Health Handbook (4180)
• Land Health Standards
o Ecological processes
o Watershed function
o Water quality and yield
o T&E and native species
• Sage grouse habitat management objectives
• Resource Management Plans
• Commitments in NEPA documents or biological opinions
Based on this review, consider what management goals the IDT should synthesize. Provide citations to the relevant supporting background documents. Since many of these documents relate back to the Land Health Standards for the area, Land Health Standards are a good place to start. Then add management objectives not covered by Land Health Standards as needed (e.g., goals related to resource trend). During this step, it is helpful to think broadly across programs and jurisdictions to identify the desired conditions in the landscape of interest.
Multiple management goals should be addressed by a monitoring effort but should also be balanced with available resources (e.g., sample points, crews, funding). Identifying all management goals at the initial planning stage can increase efficiency in sampling efforts by ensuring that the data needed to address all management objectives are collected in a single site visit.
4.3.1.2 Step 1b: Select additional ecosystem attributes and indicators to monitor
The core and contingent methods were selected because they are relevant across BLM-managed ecosystems and the data they collect can be used to address many BLM monitoring and assessment requirements, including Land Health Standards. For example, vegetation cover and composition data might be used to address habitat, grazing, and fire recovery objectives.
Review and select monitoring methods that relate to management goals. In most cases, for LUP/RMP designs, all core methods should be collected. (The only exception is that, for terrestrial points, Project Leads may choose not to recollect Soil Characterization data after the first visit.) For other types of designs, it is encouraged but not required that all core methods be collected. Core methods that will be collected at these other design points should be selected to meet monitoring objectives and should be clearly communicated to the appropriate Resource Data Manager on the National AIM Team. If there are management goals that will not be satisfied by indicators that can be calculated from the core and contingent methods, consider adding supplemental methods (see section 4.3.4.1 for more information).
4.3.2 Step 2: Set study area and reporting units; develop monitoring objectives
4.3.2.1 Step 2a: Set the study area, reporting units, define the target population, document the geospatial layers used to describe these areas, and select the existing sample designs to be used for revisits
First, identify the study area, or the geographic extent of the resource to report on. The study area should include the entire landscape area or extent of the resource that will be monitored to meet management goals. Some common study areas are field offices, grazing allotments, watersheds, habitat areas, or streams.
Next, determine the desired reporting units (e.g., grazing allotment, watershed, field office, district, state). Reporting units are the geographic areas for which indicator averages and error estimates will be computed and for which minimum sample sizes are therefore required. Reporting units are typically nested within the study area, but depending on the management goals, the reporting unit and the study area can be the same. Generally, reporting units are administrative areas where AIM data need to be summarized for a particular analysis. In contrast to strata, reporting units can be defined at any point during an AIM project life cycle and do not affect how AIM data are collected. The number of acres (terrestrial or riparian and wetland) or stream kilometers (lotic) in each of the reporting units is documented in step 3 (section 4.3.3).
Define the target population.
The target population (or sample frame) refers to the overall resource being monitored. A target population must be limited to only those places where data will be collected and must fall entirely within the study area. This contrasts with the study area, which may include parts of the landscape that will not be sampled; for example, a watershed used as a study area may include privately owned land, but the target population would not. Sample points are selected from within the target population. The definition of the target population should contain specific information about the resource of interest: its spatial extent, ownership status, and size (e.g., all streams or just first-order streams?). Examples of target populations include all BLM lands within a study area, all perennial wadeable streams on BLM land, all riparian and wetland areas on BLM land, or sage grouse habitat on BLM lands (Herrick et al. 2017).
Once the study area, reporting units, and target population are established, document the geospatial layers used to delineate these polygons. When creating sample designs in study areas that contain existing or historical sample designs, use the same layers, or consolidate layers along the same perimeter lines, that were used to generate points in the original sample design for terrestrial, lotic, and riparian and wetland resources. Lotic sample designs may refer to the master sample (see Appendix D for more information on the master sample). Information about the number of acres (terrestrial) or stream kilometers (lotic) in the study area will be added in step 3 (section 4.3.3).
Lastly, if developing a revisit design in collaboration with the National AIM Team, additional information may be required to create an appropriate revisit design, such as which existing sample points will be incorporated into the new sample design (see sections 3.3.5 and 4.3.4.3 for more information about revisit designs).
4.3.2.2 Step 2b: Develop monitoring objectives related to resource condition and resource trend
During this step, fill out the Resource Condition and Trend Objectives Tables. Monitoring objectives are quantitative statements that provide a means of evaluating whether management goals were achieved. Monitoring objectives should be specific, quantifiable, and attainable based on ecosystem potential, resource availability, and the sensitivity of the methods. Quantitative monitoring objectives may be available from a variety of sources (see section 3.5) or they may be developed in the monitoring planning process. Objectives guide how and where to focus sampling efforts so that there are sufficient data to address management goals and to ensure sampling designs meet project needs. While many projects take place across a large area (e.g., within the boundary of a LUP, RMP, Field Office, or District Office), sample designs can also be created for much smaller projects such as restoration treatment areas. If more points are needed in specific areas, targeted points can be utilized, or an intensification sample design can be drawn over part of an existing sample design to ensure that enough information can be obtained within those areas.
Begin by listing management goals (from step 1) in Column 1 of the Resource Condition and Trend Objectives Tables.
While filling out the table, each management goal should have one or more corresponding monitoring objectives. Projects with differing objectives among reporting units will need to complete separate Resource Condition and Trend Objectives Tables for each reporting unit (see section 4.3.2.1).
At a minimum, monitoring objectives should include:
1. the indicator(s) that will be monitored;
2. quantitative benchmark(s) for each indicator; and
3. for inference beyond the plot or reach scale, the proportion of the resource that is required to meet the benchmark.
The most robust monitoring objectives also clearly identify the reporting units, a time frame for evaluating the indicator(s), and the desired confidence level (e.g., 90% confidence) in the objective.
Resource trend objectives are used to describe the desired change in indicator values over a specified time period. These may include short-term objectives (e.g., evaluating recovery of a study area following a disturbance) or long-term objectives. At a minimum, select the indicator(s) and related measurement units for each management goal, the desired direction of change (increase, decrease, or no change), and the time period for assessing change. The time period for assessing change could be the amount of time following or preceding a particular event (e.g., a change in management or a disturbance), a comparison between two time periods (e.g., 2015-2019 compared to 2020-2024), or a fixed interval (e.g., trend over the next 10 years). For robust trend analyses it is beneficial to specify the magnitude of desired change; this is equivalent to a benchmark for trend and is the specific amount or range by which the indicator should change to meet stated objectives.
When considering an area with multiple monitoring locations, some amount of failure to achieve a benchmark is often acceptable. Natural events such as floods, droughts, fire, and disease result in natural variability across a landscape. For this reason, monitoring objectives also include the proportion of the landscape that is required to meet a given benchmark. For example, achieving a benchmark density of plants on 80% of seeded acres can indicate success, even if 20% of the acres did not meet the benchmark value. If monitoring information shows that an insufficient amount of the resource has met a benchmark, then management actions will be triggered.
The IDT should document benchmarks, benchmark sources, and the proportion of the resource that is required to meet the benchmarks for each indicator of interest in columns 3-5 of the Resource Condition and Trend Objectives Tables. This exercise will quickly reveal indicators that will require professional judgement, the development of ecological site descriptions, or other resources to aid in future data interpretation. Benchmarks, along with associated required landscape proportions, provide a way to objectively operationalize policy statements such as "take appropriate action" to make "significant progress toward fulfillment" of land health standards.
Example monitoring objectives, with benchmarks and required proportions (a worked illustration follows these examples):
• Soils Land Health Standard: In the grazing allotment, maintain soil aggregate stability of 4 or greater on 80% of lands with 80% confidence over 10 years.
• Watershed Function within Land Use Plan Area: Maintain bank stability of greater than or equal to 75% for 80% of perennial wadeable streams in the planning area with 95% confidence over 10 years.
• Sage Grouse Habitat within Land Use Plan Area: In all SFA and PHMA, the desired condition is to maintain all lands ecologically capable of producing sagebrush (but no less than 70%) with a minimum of 15% sagebrush cover, or as consistent with specific ecological site conditions, over 5 years.
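To show how a benchmark and required proportion are evaluated together, the Python sketch below applies the soil aggregate stability example above to a set of hypothetical plot ratings. It is an unweighted illustration only; a real AIM analysis would typically apply point weights and confidence intervals, as described under point weights in Step 4b and in the analysis and reporting workflow (Section 6.0).

# Hypothetical, unweighted illustration of a benchmark plus required proportion.
benchmark = 4                # minimum acceptable soil aggregate stability rating
required_proportion = 0.80   # 80% of the resource must meet the benchmark

# Hypothetical plot-level soil aggregate stability ratings.
plot_ratings = {"Plot-01": 5, "Plot-02": 3, "Plot-03": 6, "Plot-04": 4,
                "Plot-05": 2, "Plot-06": 5, "Plot-07": 4, "Plot-08": 6}

meets = [name for name, rating in plot_ratings.items() if rating >= benchmark]
proportion_meeting = len(meets) / len(plot_ratings)

print(f"{len(meets)} of {len(plot_ratings)} plots meet the benchmark ({proportion_meeting:.0%}).")
print("Objective achieved" if proportion_meeting >= required_proportion else "Objective not achieved")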
4.3.2.3 Step 2c: Refine the target population using monitoring objectives
Identify whether the target population needs modification to meet the monitoring objectives. For instance: Does the target population need to be larger? Should areas be excluded? Are there areas that will need more intense sampling achieved through stratification (see section 4.3.3)? Targeted points may be placed on existing locations (such as when revisiting existing design points out of sequence or when visiting a long-term trend point), or new locations may need to be identified. For new targeted points, identify the ruleset for locating the points. Some common rulesets include manually placing a point on a map within a feature of interest or placing points in the field based on the presence of a feature of interest that cannot be identified from the office (e.g., a sage-grouse nest).
It is worth explaining random points here. Random designs distribute points throughout the sample frame or design area in a spatially balanced manner, considering pre-determined strata. This type of design allows inference to the entire area for which the design was created and avoids the bias created by selecting locations of interest. When random designs do not sufficiently address the management goals (e.g., a desire to understand grazing impacts in a certain allotment), it is an option to identify targeted points to be sampled. These points can be valuable for addressing location-specific monitoring questions; however, they cannot support inference beyond the area sampled.
4.3.2.4 Remote sensing can help inform monitoring objectives
Back-in-time fractional vegetation cover datasets, such as those that can be generated with LandCART, can inform the development of meaningful target benchmarks. While far from being baseline monitoring data, these back-in-time data are often the best indicator of site potential and change, especially where no on-the-ground monitoring has been completed. By comparing key AIM indicators over the past several decades, one can identify the degree to which the vegetation community has become degraded, as well as whether different sites have diverged in extant vegetation but share similar ecological potential. Moreover, these datasets may offer a reasonable target benchmark for restoration after a more recent disturbance, such as oil and gas development or fire.
4.3.3 Step 3: Select criteria for stratifying the study area (if necessary)
Identify whether strata are necessary and, if so, which strata will be used for the sample design, and begin filling out the Sample Design Table. Document which strata will be utilized, how many sample points will be collected in each, and the amount of resource that will be represented by each stratum.
Stratification can be used to distribute sample points across the landscape or resource and/or to ensure that areas of interest, including reporting units, are sufficiently sampled (i.e., have adequate sample sizes for reporting). Stratification is not required and, when used, must be justified. Each stratum receives its own allocation of points, typically but not always in proportion to the relative size of the stratum. This means that each stratum is guaranteed to be sampled, even if it is a small portion of the target population. Stratification considers properties of the study area such as physiography, management boundaries, ownership, or other attributes of the resource that need to be described to meet the monitoring objectives. Stratification decisions should be captured in the Sample Design Table.
The design process will typically start with the creation of a simple, unstratified design across a broad area (e.g., LUP/RMP). The draft design will then be reviewed by the Project Lead and IDT to determine if the design is adequate or if different point allocations are necessary in certain areas. If more points are needed in specific areas, stratification may be used, or an intensification can be added to the design in the future, to ensure that enough information can be obtained within those areas.
Additional strata may be included in the design if deemed necessary. However, adding strata should be done with considerable thought, as sample sizes, required resources, and the complexity of data analysis increase with each additional stratum. Additional stratification or point allocation approaches include but are not limited to:
• Resource Management Plan boundaries
• Strahler stream order categories
• Habitat areas for sage grouse or other species of special concern, such as T&E fish species
Terrestrial Designs
The general recommendation for terrestrial monitoring designs is to minimize stratification, either forgoing stratification or utilizing as few strata as are required to meet design objectives. New terrestrial monitoring designs are frequently not stratified. If stratification is necessary, the recommendation is to stratify by physiographic properties. Physiographic properties are not typically used as reporting units. Stratifying by physiographic properties can help allocate sample points to underrepresented or more variable portions of the landscape without sacrificing the ability to describe the whole landscape.
If strata will be used, Project Leads are asked to provide a polygon shapefile of the strata with an attribute field containing the stratum name. These polygons should be clipped to the target population, typically a combination of the study area and the most current BLM ownership layer. If a design is stratified, similar strata should be combined to keep the stratum count low and the design simple and manageable. For example, combine all LANDFIRE biophysical setting (BpS) groups* that are dominated by Wyoming big sagebrush into a single stratum. Document these groupings in the Stratification Lookup Table found in the appendix of the Monitoring Design Worksheet. If any strata are less than 3,000 acres or 1% of the study area, it is recommended that they be grouped with other strata so that the resulting stratum is greater than 3,000 acres or 1% of the study area. If several polygons are grouped to obtain the final strata, be sure to document how those decisions were made and which polygons were combined to create the groups.
* Early terrestrial designs (and, as a result, some current designs) were typically stratified by BpS groups, a remote sensing-derived layer that is conceptually very similar to the Natural Resources Conservation Service (NRCS) Ecological Sites but is available as a continuous and consistent layer across the western US and was used in the now-retired master sample. BpS groups represent natural vegetation potential on the landscape based on the biophysical environment and historic disturbance regimes. Ecological Sites and watersheds have also been used to stratify terrestrial designs.
Lotic Designs
The general recommendation for stream and river monitoring designs is to limit the use of strata unless minimum sample sizes are insufficient to report on specific areas or species of interest. However, all designs will be stratified by Strahler stream order, grouped into three categories with grouping depending on location:
• Lower 48: small streams (1st and 2nd order), large streams (3rd and 4th order), and rivers (5th order and above).
• Alaska: small streams (1st order), large streams (2nd and 3rd order), and rivers (4th order and above).
If any of the stream or river strata contain less than 1% of the total stream kilometers or result in fewer than three sample points, we often recommend grouping that stratum with another stratum.
Riparian & Wetland Designs
Similar to terrestrial and lotic, the general recommendation for riparian and wetland monitoring designs is to limit the use of strata unless minimum sample sizes are insufficient to report on specific areas or species of interest. However, for many designs we recommend stratifying by wetland size to ensure adequate sampling of small wetland features on the landscape. Small wetland features account for less of the resource on the landscape, so stratifying by riparian and wetland size ensures adequate representation of small features. If stratification by size is not used, points more often fall in large wetland complexes. Work with the National AIM Team to determine if stratification by wetland size works well for the study area.
4.3.3.1 Remote Sensing Informs Stratification
4.3.4 Step 4: Select and document supplemental monitoring methods; estimate sample sizes; set sampling frequency; develop implementation rules
4.3.4.1 Step 4a: Select and document supplemental monitoring methods (optional/if required)
When determining supplemental monitoring methods, consider the following guidelines:
1. Decide whether supplemental indicators are necessary to meet management and monitoring goals. Keep in mind that adding supplemental indicators will require additional work in the field and beyond (see below).
2. If supplemental indicators are necessary to meet management goals and monitoring objectives, first evaluate the core and contingent methods to determine if these supplemental indicators can be calculated using a core or contingent method.
3. If a necessary indicator cannot be calculated from the core or contingent methods, select a supplemental method.
Select supplemental methods that are used by other monitoring programs or state/national regulatory agencies and that are documented clearly in a peer-reviewed source such as a method manual or journal article. Other desirable characteristics of supplemental indicators and methods include relevance to Land Health Standards; the ability to be measured objectively and consistently in many ecosystems by different observers; scalability; and applicability to multiple objectives. Be sure to document the rationale for including the supplemental indicator as well as a citation for the method. The National AIM Team strongly advises against creating new methods or modifying existing methods.
Prior to collecting supplemental methods data, data storage, training, and in-field support processes must be identified. Tasks required to implement supplemental methods include:
• Identify the data management protocol and tools for the supplemental method, including data recording, electronic data capture, data storage, quality assurance (QA) and quality control (QC), and analysis and reporting.
• Establish calibration standards for the supplemental method.
• Identify capacity to provide technical support for the supplemental method (e.g., who will answer questions about it during the field season).
• Plan sufficient training for successful implementation of the supplemental method. This training cannot occur during a core methods training, but it is recommended that it follow soon after so that field crew members can integrate what they learned during the core methods training.
• Practice the supplemental method in the field to establish compatibility with AIM plot or reach layout and requirements (e.g., not walking on the left side of a terrestrial transect, collecting water quality before instream lotic sampling begins).
• Consider the additional time required for a crew to complete the supplemental method at each sampling location. If the additional time required to collect supplemental methods impairs the crew's ability to visit the desired number of plots or reaches, then the desired number of plots or reaches might need to be reduced.
Return to step 1b and ensure your supplemental indicator and methods will provide the specific data needed to address the management question(s).
4.3.4.2 Step 4b: Estimate sample sizes (completed in coordination with the National AIM Team)
Determine whether significant amounts of comparable, high-quality monitoring data already exist. If so, the required sample size may be smaller than when such data are not available, because those existing points may be incorporated into eventual analyses. See section 4.3.5 for more information on using previously collected monitoring data.
For unstratified designs, all sampling effort is simply dedicated to the target population/sample frame. For stratified designs, the default method for allocating sampling effort is to proportionally allocate points based on the area/length that each stratum covers; e.g., a stratum that makes up 20% of the target population would receive 20% of the total points to be sampled. If stratification is used, ensure sufficient sample sizes for each stratum; for example, all strata should have at least 3 monitoring points. The recommendation is to start with the proportional allocation approach and then adjust sample sizes up or down as needed per stratum.
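The arithmetic behind proportional allocation, and the point weights that result from it, can be illustrated with the short Python sketch below. The strata, acreages, and point totals are hypothetical; in practice the National AIM Team completes these calculations, and rounded or adjusted allocations may need to be rebalanced so the total point count stays fixed.

# Hypothetical proportional point allocation across strata.
strata_acres = {
    "Wyoming big sagebrush": 120_000,
    "Salt desert shrub": 60_000,
    "Riparian/other": 20_000,
}
total_points = 50            # total points to sample in one cycle
min_points_per_stratum = 3   # ensure every stratum can be reported on

total_acres = sum(strata_acres.values())

for stratum, acres in strata_acres.items():
    proportional = round(total_points * acres / total_acres)
    n_points = max(proportional, min_points_per_stratum)
    weight = acres / n_points  # acres represented by each sampled point
    print(f"{stratum}: {acres / total_acres:.0%} of area, "
          f"{n_points} points, point weight = {weight:,.0f} acres/point")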
The number of sample points may need to be increased in areas that cover a small percentage of the study area to achieve a sample size sufficient to provide information for management decisions. For example, black sagebrush areas often occupy a small portion of the landscape but provide important sage grouse habitat, and thus will need to be well represented in a design that is focused on sage grouse. If the desired number of points in one stratum is increased, others may have to be reduced to keep the total number of points sampled the same.
Allocating zero points to any stratum will limit the ability to draw inference to the entire landscape, because the target population consists only of areas that may be sampled, and strata with zero points will go unsampled. Exclude points from a stratum only if you are willing to drop it from the target population because 1) the stratum is not part of the target population defined by the monitoring goals and objectives (e.g., open water in a terrestrial monitoring effort) or 2) the stratum is being monitored as a part of a separate monitoring effort and should not be monitored as part of this project. In either case, update the polygons representing the target population to exclude those areas.
Point weights are the area (in acres or hectares) or length (in stream kilometers) represented by an individual sample location. Weights are used to generate statistical estimates of resource status or condition across the landscape (i.e., proportional estimates). Specifically, weights are used to adjust the relative influence that each point has on the final estimates; points with larger weights have more influence, and points with smaller weights have less. The weight of each point depends on the specifics of the design, how it was implemented (see section 5.3.3.1 regarding final designations), and the reporting area of interest. Changing sample sizes in a given stratum will affect point weights and therefore should be done with care. As sample sizes are increased in a stratum, the area/length represented by each point is reduced; thus the point weights and relative influences are reduced.
Instructions for filling out the remainder of the Sample Design Table:
When stratification is used, fill in the first row of the Sample Design Table with information regarding the entire sample frame.
• Proportional area or length: Divide the number of acres or stream km represented by each stratum by the total number of acres or stream kilometers in the entire study area to get proportional areas/lengths. This will be 100% for an unstratified design.
• Proportional points per stratum: Calculate the proportional number of points per stratum by multiplying the proportional number of acres or stream km by the total number of points to be sampled. This will be the total number of points for an unstratified design.
• Final points per stratum (optional): If a proportional allocation of points will not satisfy the monitoring objectives, adjust the number of points that will be monitored for each stratum. Calculate the number of sites to sample in each stratum, taking into account the amount and quality of existing or legacy monitoring information, the amount of resource that needs to be monitored, statistical considerations, and funding and personnel limitations. If points are allocated in a way that is highly disproportionate across strata, justification for the disproportionality should be documented alongside the table.
Final point numbers normally refer to the total number of sampling locations visited within one sampling cycle (e.g., over 5 years). If point numbers refer to a different time period, specify that in the Sample Design Table.
• Point weights: Once all the other columns in the Sample Design Table have been finalized, point weights can be calculated as the total number of acres or stream kilometers within the stratum divided by the number of points to be monitored for that stratum.
For assistance in completing this section, contact the National AIM Team, particularly for more complex revisit designs.
4.3.4.3 Step 4c: Define revisit parameters (use the Revisit Frequency Table to document decisions made in this section)
Determine the revisit frequency/interval and the number of years sampled per cycle. Most monitoring efforts need to be spread out across several years to accommodate field crew capacity and to ensure that interannual variability is captured by the monitoring data. Once the total number of sample points and the point weights have been calculated, determine how many years of sampling might be necessary to achieve the desired sample size. Factors to consider when setting revisit frequency include:
• Reducing bias from year-to-year climate variability (e.g., drought) by using a rotating panel design (where a certain number of points, all contributing to the same design, are sampled over several years) is recommended. Rotating panels help ensure that sample points are randomly distributed across the entire project area every year. For example, a 20-year design with a 5-year revisit frequency would consist of 5 revisit panels, where each point is assigned a specific year in which it should be sampled. All points in the same panel will be revisited every 5 years, for a total of 4 data collection efforts (cycles) at each point over the 20-year design (a simple panel sketch follows this list). In contrast, when specific geographic areas are sampled in only 1 or 2 years rather than during every year of the design, bias from climate variability can affect condition estimates. However, it may be appropriate in lotic sample designs to sample only a proportion of the years in each sampling cycle based on logistical and funding limitations (e.g., 2 years sampled out of 5).
Detecting change in condition through time (i.e., trend) is a common monitoring objective that requires setting an interval for revisiting points over time. Questions to consider when setting revisit frequency include:
• What revisit frequency makes sense relative to the disturbance or management event? For example, ES&R monitoring dictates annual revisits for three years, whereas monitoring stream geomorphic changes following livestock removal might occur on a 3- to 5-year basis, and changes in upland condition might occur over 5-10 years.
• How resistant and/or sensitive to disturbance are the areas being monitored? How resilient are those areas following disturbance events? Consider establishing more frequent revisit intervals in areas that are more sensitive or less resilient to disturbance than in areas that are highly resistant and resilient.
• How variable and/or sensitive are the indicators used to evaluate the management objectives? Consider more frequent revisit intervals for indicators that are particularly sensitive to inter-annual variability in abiotic conditions.
• What resources will be available (e.g., funding and personnel)?
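The rotating panel example above (a 20-year design with a 5-year revisit frequency) can be sketched in a few lines of code; the panel assignment below is illustrative only, since actual panel membership is set when the design is drawn.

```python
# Illustrative sketch of a rotating panel design: a 20-year design with a
# 5-year revisit frequency has 5 single-year panels and 20 / 5 = 4 cycles.
design_duration = 20            # total design duration, in years
revisit_frequency = 5           # years between revisits (also the cycle length)
panels = list(range(1, revisit_frequency + 1))    # panels 1 through 5
cycles = design_duration // revisit_frequency     # 4 data collection cycles

# Each point is assigned to one panel; a point in panel 2 would be sampled in
# years 2, 7, 12, and 17 of the design.
def sample_years(panel):
    return [panel + cycle * revisit_frequency for cycle in range(cycles)]

print(cycles)              # 4
print(sample_years(2))     # [2, 7, 12, 17]
```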
The default revisit interval for Resource Management Plan effectiveness monitoring is every 5 years, unless natural conditions or management actions occur that would elicit landscape-scale responses on shorter timescales. A revisit interval of less than three years is discouraged because of the rate at which most changes occur.
Set the number of cycles and the total duration of the design – A cycle is a defined time period over which all panels are visited once; e.g., a design with five single-year panels would have a cycle duration of five years. The number of cycles in each revisit design depends on both the revisit frequency (or cycle duration) and the total design duration, such that number of cycles = design duration ÷ revisit frequency.
• In a typical design the standard number of cycles is 4, with a total design duration of 20 years (using a 5-year revisit frequency).
Set the proportion of design points that will be revisited – Depending on objectives, only a subset of points may need to be revisited. In general, trend assessments are most effective when approximately 70-80% of the points sampled within a year or cycle are points that are being revisited. Factors to consider when determining the proportion of design points that will be revisited include:
• Revisitation involves resampling points and can help to explain changes over time. Higher proportions of revisit points mean more statistical power to detect trend.
• Non-revisit points add new sampling locations across the landscape and help to explain spatial variability in resources. Higher proportions of non-revisit points mean higher precision of condition estimates.
• If trend assessment is a priority and existing trend data are unavailable, a higher proportion of revisits will be beneficial. Conversely, if management goals are more focused on precise condition assessment at a single point in time, a higher number of non-revisit points will be preferred.
• In general, a good balance between trend and status estimates is reached using 70-80% revisits and 20-30% non-revisits each year.
• Some monitoring efforts may not include revisitation at all, depending on various project constraints or monitoring objectives.
4.3.4.4 Step 4d: Develop implementation rules
Implementation rules are the rules that guide the rejection, movement, and merging of points. Standard rules are outlined in the lotic and riparian and wetland design management protocols and the terrestrial data management protocol. Review the standard implementation rules to identify whether they need to be customized to meet the monitoring objectives. If so, consult with the National AIM Team when developing the additional criteria to ensure the design will remain statistically valid.
• Proper design implementation involves documenting the fate of each point in a given design. Documentation of point fate should be tracked using the Terrestrial, Lotic, or Riparian & Wetland Office Evaluation WebMaps.
4.3.5 Step 5: Collect and evaluate available data to determine sampling sufficiency and the validity of the strata (if available)
In this step, existing data are used to determine whether adjustments are needed to the sample sizes identified in step 4. Consult with the National AIM Team to implement this step.
This step addresses the following question: “How much data should be collected across the study area to address the management goals and monitoring objectives?” Analysis of existing data and monitoring objectives will provide information about the number of points required to detect whether an objective for a particular indicator has been met (e.g., the number of sites needed to determine whether 70% of areas with the potential to support sagebrush have greater than 15% sagebrush cover).
Consider sample size requirements in terms of the management objectives and the information needed for the decision. Look at multiple indicators and take a preponderance-of-evidence approach. For example, if one indicator requires many more samples than the others, one may be able to rely on the preponderance of evidence from the other indicators to make a decision. If many indicators are showing insufficient information, then more monitoring points are likely needed.
Land Use Plan and many other AIM efforts seek to estimate the proportion of a resource (in acres for terrestrial and riparian and wetland ecosystems and kilometers for perennial streams) within the project area that is meeting or not meeting objectives, within a certain level of confidence. Given the goal of estimating condition, the general recommendation for such monitoring efforts is to take an approach that minimizes the likelihood of not detecting a difference in conditions when a difference actually exists (i.e., Type II errors).
From a statistical standpoint, the sample size required (e.g., number of plots or stream reaches) to determine the proportion of the resource that is achieving the desired conditions will depend on three factors: 1) the amount of existing AIM-compatible data, 2) the estimated proportion of data meeting an objective, and 3) the desired confidence level.
• For many new AIM projects, data are already available from other AIM monitoring efforts. Always evaluate and consider using existing data when determining sample sizes.
• Depending on monitoring objectives and previous sampling dates and conditions, national and other AIM datasets may be used to offset sample size requirements for new monitoring objectives. At a minimum, these data can be used to help assess the proportions of a resource that are meeting an objective and help estimate the required sample size for monitoring objectives.
• If a high degree of confidence (e.g., 95%) is desired in the condition estimates derived from the data, then larger sample sizes are required. To balance the desire to minimize Type II errors (i.e., failure to detect a difference) with the need for a realistic workload, the specific recommendation is to establish sample sizes using an 80% confidence interval. If monitoring data are to be used to support a contested management decision, a higher confidence level with a smaller margin of error may be necessary.
To determine how much data are needed to address management goals and monitoring objectives, consider the following (a generic sample-size sketch follows item B):
A. Identify the indicators of interest and the proportion of the landscape that is likely in a given condition (e.g., % of landscape having suitable or unsuitable habitat). It can be helpful to look at existing data to estimate the proportion of sites currently meeting monitoring objectives as a starting point.
B. Select an appropriate confidence level for the monitoring objective.
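For orientation, the standard normal-approximation formula for estimating a proportion shows how the expected proportion, confidence level, and margin of error interact. This is a generic statistical sketch, not the AIM procedure; the sample sufficiency tables in Appendix C remain the source to use for AIM designs, and the values below are illustrative.

```python
# Generic sample-size approximation for estimating a proportion.
# Illustrative only; use the Appendix C sufficiency tables for AIM designs.
import math

def sample_size(p, confidence=0.80, margin_of_error=0.10):
    """Approximate n needed to estimate a proportion p within +/- margin_of_error."""
    z = {0.80: 1.282, 0.90: 1.645, 0.95: 1.960}[confidence]   # two-sided z-scores
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# e.g., ~70% of sites expected to meet the objective, 80% confidence, +/-10% margin:
print(sample_size(0.70))    # roughly 35 points
```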
With the information identified above, initial sample sizes can be estimated with the sample sufficiency tables (see Appendix C).
If increased precision and accuracy of the design is needed, a greater number of points will be necessary. If it is likely that additional points will be desired during design implementation, additional oversample points should be added to the design during design creation. If sufficient oversample points exist in the original design, they may be sampled at a later date to increase the sampling intensity of the design. If adding more points is not feasible because of resource limitations, an alternative approach is to accept a lower level of confidence for some reporting units. In these cases, data from other sources (e.g., remote sensing, use data) can be valuable for a multiple-lines-of-evidence approach.
After each year of sampling, designs should be reviewed to assess whether the current sampling intensity is appropriate or whether increased intensity is needed to obtain a larger sample size. At the end of each revisit design cycle, designs should be evaluated to determine whether to continue revisiting the design or whether a new design should be created.
4.4.6 Step 6: Apply stratification and select monitoring locations
In this step the final sample design is developed, reviewed, and documented. Project and state leads (with support from National AIM Team staff) must document how the design(s) were created, any additional notes, information on the sample frame, and what revisions were made and why. If the design process or sample sufficiency analysis resulted in different sample sizes than those identified in step 4b (section 4.3.4.2), document those changes here as well. Consult with the National AIM Team to implement this step.
Standard AIM designs use the GRTS algorithm to generate statistically valid, spatially balanced, random sampling locations (Stevens and Olsen 2004).
Several tools are available to complete statistically valid monitoring designs. For new designs, the standard approach is for the National AIM Team to use the spsurvey package in R to draw GRTS designs. For designs that incorporate previously sampled points, the standard approach is modified to spatially balance new points around existing sample locations and revisit a proportion of existing locations. For terrestrial projects in small geographic areas (e.g., <10,000 acres), one-year designs, or designs that exclude some areas of the landscape and do not need to balance around existing points, the web-based Balanced Design Tool hosted by the Jornada Landscape Toolbox is available. Designs created using the Balanced Design Tool use the same GRTS code as standard AIM designs; both are statistically valid and produce data that can be uploaded to the national AIM database. The Terrestrial National AIM Team does recommend that the design files for any designs created using the Balanced Design Tool be uploaded to the SharePoint for future analysis needs. For lotic and riparian and wetland projects, all random designs, no matter how small or short term, are run through the National AIM Team.
Once a draft design has been created, the National AIM Team will share the design with the State and Project Leads for review. The draft design should be reviewed to make sure it will meet the design criteria described in steps 1-4.
Questions to ask when reviewing a draft design include:
• Are there enough points in all areas for which data are needed?
o Consider whether stratification is needed if there are not enough points within an area.
o Consider rejection rates. Areas with high rejection rates may require additional oversample points. It can be complicated to add points to a design at a later date, so ensure there are sufficient numbers to cover all rejections.
• Are there any areas that were left out of the design that should have been included? Was this the randomness of the design, or are updated GIS files needed to represent the full extent of the design area?
• Is there any inappropriate clumping (i.e., too many points) of points in certain areas?
If needed, work with the National AIM Team to iterate and further refine the sample design.
Once the final design is achieved, document the following: what tool was used to create the design, who ran the design, what (if any) modifications were made to the draft design, and where the design files are stored. If modifications were made, include an updated and final version of the Sample Design Table as well.
4.3.6 Step 7: Data management plans
Review the standard QA and QC procedures for AIM efforts to ensure that you understand your roles and responsibilities when it comes to data management. General information on QA/QC can be found in Section 5.3.4.
• Terrestrial protocols are described in the Monitoring Manual for Grassland, Shrubland, and Savanna Ecosystems and the Terrestrial Data Management Protocol.
• Lotic procedures are found in the Lotic Data Management and QAQC Protocol.
• Riparian & Wetland protocols are described in the R&W Field Protocol (https://www.blm.gov/sites/blm.gov/files/docs/2022-05/March2022Draft_RW_AIMProtocol_FieldSeason_DataSheets.pdf)
o and the R&W Data Management Protocol (https://www.blm.gov/sites/default/files/docs/2022-08/R%26W_AIM_DataManagementProtocol_2022.pdf)
• Data management for BLM AIM efforts is supported by the National AIM Team through standardized electronic data capture and management. More information is available on the AIM website (www.blm.gov/aim) under Data Management and Stewardship, and in the resource-specific Data Management Protocols linked on the AIM Resources page (https://www.blm.gov/aim/resources).
• Document what data management and QA and QC procedures will be implemented during each field season, including whether you plan to augment the standard procedures.
• For supplemental monitoring methods, additional data management plans and QC procedures will be needed, including training and electronic data capture and storage. Document those procedures here.
5.0 Data Collection
5.1 Overview
State Leads, Project Leads, specialists, and other field office staff are a critical part of the data collection process. Whether or not they are collecting data, everyone has a role to play in data quality. The AIM principles include quality assurance and quality control for each step of the data collection process, thus ensuring quality data that practitioners can use with confidence. Quality assurance and quality control occur throughout the process and are a responsibility of all participants. Quality assurance is a proactive process intended to minimize the occurrence of errors.
Quality assurance measures include strategies like using electronic data capture tools with built-in data rules and documentation, required training, and field crew calibration. Quality control is a retrospective process that identifies errors after data have been collected. The ability to correct errors during the QC process is limited because points cannot be revisited under the exact conditions that occurred during the original data collection event.
AIM data collection typically occurs during peak vegetative/seasonal expression of the ecosystems being sampled. It is often accomplished by dedicated data collection crews but can also be accomplished by resource specialists in field offices. Data collection involves the following steps, which are elaborated on throughout this section:
• Crew Hiring
• Point Evaluation & Rejection
• Implementation of Monitoring Designs & Trip Planning
• Field Sampling
• Electronic Data Capture Applications
5.2 Tools
• Lotic AIM Evaluation and Design Management Protocol
• Lotic AIM Data Management and Quality Assurance and Control Protocol
• Terrestrial Data Management Protocol
• Riparian and Wetland AIM Design Management and Plot Evaluation Protocol
• Riparian and Wetland AIM Data Management Protocol
5.3 The Four Steps to Data Collection
5.3.1 Step 1: Preparation
Once a sample design has been completed (Section 4, Design), the next step is to prepare for field season data collection at the points in the design. Data collection preparation begins with point evaluation, followed by monitoring design evaluation and trip (also called “hitch”) planning. For data to be ingested into the National AIM database, data collectors must meet data ingestion requirements. Refer to the terrestrial, lotic, and riparian and wetland Data Management Protocols for specific data ingestion requirements.
5.3.1.1 Personnel and Equipment Prep
AIM data are typically collected by two- or three-person teams. Data can be collected by anyone, including BLM staff and dedicated AIM crews. If data will be ingested into the national databases, data collectors will need to meet data ingestion requirements that include training standards (see section 5.3.2). When dedicated field crews will be used to collect data, crew hiring can be accomplished through a variety of mechanisms, including contracting, BLM seasonal hiring, or use of assistance agreements. When hiring specialized crews, crew hiring begins about 3 to 6 months before data collection is scheduled to occur. Timelines and hiring mechanisms will vary by state. AIM crews are generally made up of two to three members, with one crew lead and one to two technicians.
• Work with the AIM State Lead to determine hiring options. The hiring process may require completion of a Task Order (contracting), coordination with the HR department (hiring through the BLM), or coordination with the partner organization (assistance agreements).
• For reference (a rough capacity calculation is sketched below):
o A terrestrial field crew with 2-3 people can monitor approximately 50 plots per season.
o A lotic field crew with 2-3 people can monitor 25-35 reaches per season.
o A riparian & wetland crew with 3 people can monitor 25-35 plots per season.
Crews need to be on board in time to attend the appropriate methods field course and any pre-course training. Data collection requires the use of specific instruments and equipment.
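The per-crew productivity figures above can be used for a rough season-capacity check. The design size and number of years below are hypothetical; actual crew planning is coordinated with the AIM State Lead.

```python
# Rough capacity check using the terrestrial figure of ~50 plots per crew per season.
# The design size and cycle length are assumed values for illustration.
import math

points_in_design = 250            # total points to visit in one cycle (assumed)
years_in_cycle = 5                # years over which the cycle is spread (assumed)
plots_per_crew_per_season = 50    # terrestrial productivity figure from the guidance above

points_per_year = points_in_design / years_in_cycle                    # 50 points per year
crews_needed = math.ceil(points_per_year / plots_per_crew_per_season)  # 1 crew
print(points_per_year, crews_needed)
```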
Refer to the appropriate data methods manual or current equipment list (for lotic or riparian & wetland AIM core methods) for all core, contingent, and supplemental methods that will be collected to ensure all necessary equipment has been purchased. Set up and calibrate equipment prior to data collection. If data collection using supplemental methods is planned, additional equipment that is not included in the standard equipment lists may be needed.
5.3.1.2 Point Evaluation and Rejection
Sample point evaluation involves screening for safety, accessibility, and ability to sample. Point evaluation may be conducted by the project lead, crew manager, or crew lead. Office point evaluation should be completed before the start of the field season or immediately before the start of a scheduled field trip (i.e., hitch) and generally determines whether crews should attempt to sample a point or whether the point will be rejected without being visited. Additional point evaluation occurs in the field (which could lead to point rejection in the field) at the time the crew visits to attempt sampling.
Evaluating sample sites against rejection criteria: Rejection criteria allow for a consistent approach to tracking points that are not part of the target population (non-target), that are unsafe to sample, or that are unsampleable for other reasons. When consistently applied, the use of rejection criteria will preserve the ability to make statistical inferences from the data while also maximizing efficiency and promoting safety during field sampling. Rejection criteria that are used to implement a specific design must be carefully considered during analysis and reporting because they can limit the inferences that can be drawn from the data. For example, in terrestrial AIM, if all plots on slopes greater than 50% are rejected, then the monitoring data only describe the resource status on slopes less than 50%.
Sample points should be reviewed against rejection criteria in the office using ancillary data sources (e.g., ownership maps, topographic maps, and aerial or satellite imagery) and the same GIS data used to produce the monitoring design. If a point is accepted in the office, the data collection crew should review the rejection criteria again upon arrival at the point in the field. If a point is rejected in either the office or the field, it is important to document the reason(s) for rejection, as this information is incorporated into data analysis. Points might not be sampled for a variety of reasons, including access issues, safety concerns, and a point not being a member of the target population. Specific non-target rejection criteria have been developed for terrestrial, lotic, and riparian and wetland resources; see the Terrestrial Data Management Protocol, the Lotic Design Management Protocol, or the Riparian and Wetland Design Management Protocol for more information.
5.3.1.3 Remote Sensing: Point Evaluations and Rejections
In conjunction with other GIS data layers, remotely sensed data are often used to help evaluate (and potentially reject) candidate points before they are visited in the field. For example, imagery indices that highlight water (e.g., the normalized difference water index, computed from near-infrared and shortwave-infrared reflectance) could help determine whether a point intersects aquatic features.
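A short sketch shows how a water index might flag candidate points for closer office review. The point IDs, band values, and the threshold of 0.0 are illustrative assumptions, not AIM policy; in practice such indices are computed in GIS from imagery covering each point.

```python
# Minimal sketch of screening points with a normalized difference water index.
# Point IDs, reflectance values, and the 0.0 threshold are hypothetical.
def ndwi(nir, swir):
    """Normalized difference water index from near-infrared and shortwave-infrared reflectance."""
    return (nir - swir) / (nir + swir)

points = {"PT-001": (0.35, 0.30), "PT-002": (0.20, 0.45)}   # point_id: (nir, swir)
for point_id, (nir, swir) in points.items():
    flag = "review for aquatic features" if ndwi(nir, swir) > 0.0 else "likely upland"
    print(point_id, round(ndwi(nir, swir), 2), flag)
```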
5.3.1.4 Monitoring Design and Trip Planning
5.3.1.4.1 Monitoring Design
When using probabilistic monitoring designs, maintaining monitoring design integrity while collecting field monitoring data is critical to retaining statistical rigor. Random monitoring designs use randomly selected monitoring locations (i.e., points) that are evenly distributed across the landscape. To maintain the statistical validity of the probabilistic monitoring design, it is critical that the points are collected in order and that there are no “holes” in the sample design. Holes in a sample design are points where data were not collected and where the fate of the point was not determined.
When a design is implemented, there are three possible outcomes for each sample point within the study area:
1. The data are collected at the sample point, and those data will contribute to inferences about the target population.
2. The sample point is found to be outside of the target population (e.g., not on BLM land, or not in the resource of interest), and the point is used to adjust estimates of the true target population for the design, thus not detracting from the statistical validity of the monitoring design.
3. The data are not collected at the sample point for a particular reason (e.g., safety, inaccessibility), but the sample point may still be part of the target population.
Best Practices for Implementing a Monitoring Design
Visit and evaluate sample points in the assigned order whenever possible to preserve the statistical validity of the design. Balancing logistics and travel efficiencies with sampling points in order can be tricky; the goal is both to avoid spatial patterns in the data and to ensure that by the end of a field season all holes have been filled. If this becomes too difficult, reach out to the National AIM Team for more guidance.
Make detailed notes regarding the status (i.e., sampled, not sampled, or rejected) and designation (e.g., target, non-target, inaccessible) of all points that were evaluated in the office or in the field. Ask the National AIM Team for help with questions about implementing the design.
Sample Point Evaluation
AIM sample points should be evaluated in advance of the field season to determine how the crew will navigate to and sample each point. Points should also be evaluated to determine whether they meet any of the rejection criteria and should be excluded from sampling. Point evaluation includes but is not limited to:
• Reviewing topographic maps and aerial imagery
• Obtaining site information from other field office personnel
• Traveling to the site in person
• Contacting private landowners to obtain access permissions and instructions
• Documenting all information obtained during the point evaluation process in a consistent fashion
If the person who did the point evaluation is not going into the field, the crew should be given the opportunity to review the information obtained during this process and ask questions prior to departing for the field.
5.3.1.4.2 Trip Planning
Trip planning steps:
1. With the set of office-accepted sample points, Project Leads, managers, and crews/crew leads should take some time to think about how to visit groups of points efficiently, while paying attention to where each point falls within the order of the monitoring design.
2. Once there is a group of sites to potentially sample during the hitch, examine all of the scouting notes to determine what might be required of the crew to access the sites.
3. If necessary, contact BLM staff or private landowners responsible for overseeing access to obtain access permission, gate keys or combinations, and access instructions.
4. Obtain maps of all the areas slated for sampling. Seek local knowledge regarding current road conditions, places to camp and get water, etc.
5. Print all necessary information and upload digital copies onto tablets.
5.3.2 Step 2: Field Methods Training
The BLM requires field crews to attend approved AIM protocol trainings for data to be ingested into national AIM databases. Project managers should ensure that data collectors receive proper training in the core methods. The field methods trainings provide standardized training for each of the AIM resources to ensure that data collection is standardized nationally. Trainings include protocol instruction for all core methods as well as applicable contingent and covariate methods, supervised practice, calibration, and general guidance about implementing a random or targeted sample design. They also provide an opportunity to practice with an electronic data capture device. Each AIM resource program has requirements for how often data collectors (including field crews, seasonal BLM staff, and contractor/agreement staff) must successfully complete an AIM field methods training. For terrestrial data collectors, it is recommended that the entire field crew attend a field methods course at the start of each field season; the minimum requirement is that the crew lead has attended an AIM training in the last three years. Lotic and riparian & wetland data collectors (including field crews, seasonal BLM staff, and contractor/agreement staff) must complete an AIM field methods training during the year in which data will be collected. Permanent BLM staff collecting lotic or R&W AIM data must have successfully completed lotic or R&W AIM field methods training within the last three years and review protocol updates every year as a refresher.
Instructor Field Methods trainings
• To support the number of regional field methods trainings that are needed, State and Field Office BLM staff are critical instructors. To ensure that regional trainings are consistent across the BLM, the National AIM Team hosts Instructor Field Methods trainings for the terrestrial and lotic resources. This training is directed toward state and regional core methods instructors. It provides specialists in different regions the skills they need to host locally adapted regional core methods field courses. Topics include a field protocol refresher, calibration, and discussion of successful training approaches.
5.3.3 Step 3: Data Collection
5.3.3.1 Field Sampling
Data collectors should verify that they have all necessary equipment ready to collect AIM data before going into the field. Once field season preparation is complete and some amount of trip planning has occurred, it is time to begin collecting data. Data collectors and crews are required to always carry a copy of the appropriate methods manuals and data management protocols in case questions or unusual situations arise.
The final designation of each point must be documented as sampled, not sampled, or rejected when it is evaluated in the office or visited in the field. Field data should be reviewed when data collectors return from the field so that resource specialists can be consulted regarding any questions or concerns. It is recommended that local resource specialists periodically go into the field with the data collectors, when possible, to ensure proper implementation of methods and to help with site-specific questions. Reaching out to the National AIM Team with questions is always encouraged as well. State Leads are responsible for scheduling early-season, mid-season, and end-of-season check-ins (or as required by the specific resource) with the National AIM Team to go over data and provide updates on project status. For State and Project Leads, it is helpful to have the crew produce an end-of-season implementation report (see this End of Season Implementation Report for an example).
5.3.3.2 Electronic Data Capture and Data Management
All AIM resources use the ArcGIS Field Maps and ArcGIS Survey123 applications for digital capture of data in the field. ArcGIS Survey123 (Survey123) is an Environmental Systems Research Institute (ESRI) data collection application that captures monitoring data. ArcGIS Field Maps (Field Maps) is an ESRI application that allows crews to navigate to sample points, capture GPS coordinates, document field evaluation statuses, and launch Survey123 forms, connecting all data to a point’s spatial location and unique identifier. Different types of tablets can run Survey123 and Field Maps. Review the latest version of each resource’s equipment lists and technology manuals for the minimum required specifications.
Terrestrial
The Terrestrial AIM Team at the National Operations Center (NOC) strongly encourages individual field offices to purchase their own tablets. Terrestrial AIM data are collected using the Survey123 and Field Maps applications. Sometimes it is necessary to supplement electronic data with paper data sheets if a tablet crashes or is temporarily unavailable. Note that paper data sheets should not be used in lieu of electronic data capture and should be used as a backup only. Data collected on paper in the field should be entered via the Survey123 application as soon as possible during the field season.
Riparian and Wetland
[Insert text here]
Lotic
Lotic AIM data are collected using the Survey123 and Field Maps applications. All lotic AIM data should be collected using these applications; printable data forms are available and should be carried as a backup while conducting surveys. The Lotic Technology and Applications Manual assists those collecting lotic AIM data in downloading and signing into the communication and data collection applications, using Survey123 and Field Maps, sharing files and communicating with team members, preparing maps for field visits, field evaluation and data collection, and backing up and submitting data. More information on iPad and data collection application use can be found in the Lotic Data Management and QA and QC Protocol. Crews should read this document thoroughly prior to collecting data and consult it throughout the field season as questions arise. The Lotic AIM Data Management Protocol illustrates and describes the main user interface and data entry workflow of Survey123 and Field Maps.
5.3.4 Step 4: Data QC and Ingestion Prior to Data Use
Elements of quality assurance, quality control, remote sensing, and benchmarks should be integrated into each step of the AIM process. The AIM principles were created to outline a standardized strategy in which data quality assurance (QA) and quality control (QC) play a crucial role.
Quality Control and Revision
[Insert text here]
5.4 Using remote sensing to evaluate critical concepts or an additional line of evidence
[Insert text here]
6.0 Applying AIM Data: Analysis and Reporting
6.1 Overview
There are numerous ways that AIM data can be used to inform land management decisions. This document presents a standardized workflow to address the most common ways to use AIM and other field data: point-specific, unweighted, and weighted analyses. This workflow is generalized from the steps outlined in BLM Technical Note 453, which specifically focuses on land health evaluations and authorizations of permitted uses. Another workflow is available for Land Use Plan effectiveness on the BLM Land Use Planning SharePoint. AIM data can be used both as outlined in their original MDWs and opportunistically if the data fall within the area of interest or assessment for an analysis. For assistance with any of these steps, or additional analyses, please contact the AIM NOC Analysts.
In addition to standard workflows, examples of AIM data being used to inform decision-making can be very helpful. Several current AIM data-use examples are described on the BLM AIM resources page (https://www.blm.gov/aim/resources). Similarly, the recorded AIM Practitioners webinars highlight current applications of AIM data and related remote sensing products, along with analysis tools that make data use easier (https://web.microsoftstream.com/channel/dd68a714-62c5-4b1b-b52a-2efd117b0001). Finally, decision documents and reports utilizing AIM data can be found on the AIM SharePoint in the AIM Community of Practice for Decisions.
6.2 Tools
The list of tools used for AIM data analysis is continually expanding; for an up-to-date list of current tools and tool training resources, see this document.
6.3 The Nine Steps to the Standard AIM Data Use Workflow
Figure 4: Analysis and Reporting Workflow
6.3.1 Preparing for an Analysis
Much of the information for steps 1-5 of the analysis workflow can be found in the relevant monitoring design worksheets. Review these worksheets to identify the relevant management goals, indicators, benchmarks, and benchmark groups to use in your analysis. If you are conducting an analysis that does not have a corresponding monitoring design worksheet, or the worksheet is out of date, steps 1-5 will need to be completed with the planned analysis in mind.
Step 1: Identify Management goals and Land Health Standards to be evaluated.
Management goals and/or questions will guide the analysis regardless of whether the initial design or data were collected for the intended/current analysis. If available and applicable, refer back to the completed Monitoring Design Worksheet for the original management goals and monitoring objectives. If the management goal and monitoring objective at hand are listed, review and reference this document throughout the next 7 steps as needed.
If a management goal is not outlined, the first step should be to outline the management goals and land health standards to be evaluated. Other helpful documents might include Land Use Plans, TN 453 Appendix 1 on Land Health Standards, other policy and NEPA documents, and Biological Opinions. See section 4.3.1.1 for more information about developing management goals.
Step 2: Obtain Available Data Within the Area of Interest
Gather all available data within the area of interest for the management goals identified in Step 1. Use the data portals and other tools to visualize your data to better understand the amount and type of data that have been collected in your analysis area and each reporting unit. This will help inform each successive step below, particularly Step 7. This section focuses on AIM data, but other data should be considered and incorporated if applicable, including other long-term monitoring datasets (e.g., MIM, PFC, IIRH, frequency transects, photo plots), covariate data (e.g., precipitation and climate data from PRISM, GridMET, or local weather stations), and short-term monitoring and use data (e.g., grazing utilization, recreational use information).
There are several ways to access AIM data. The most effective data access method depends on a data user’s computing environment: working from a DOI or BLM office (DOI or BLM internal); out of the office but on the BLM virtual private network, or VPN (DOI or BLM internal); out of the office but not on the VPN (external); and non-DOI public access (external). The desired application or use case for the data is an additional consideration.
There are three main types of AIM data:
1. Calculated indicators of ecosystem health for uplands (terrestrial ecosystems), streams and rivers (lotic ecosystems), and riparian and wetland areas. These are also known as AIM indicators or AIM indicator data.
2. Raw data used to calculate AIM indicators. These are the direct measurements recorded in the field using AIM methods as described in the AIM protocols.
3. Site photos showing each AIM monitoring location each time it was visited.
AIM data can be accessed using several different tools, each designed for a specific purpose or audience:
• AIM Data Portals (DOI Internal) – These web maps show AIM sampling locations and provide access to associated data and metadata. Users can connect to, filter, download, and share data. These lightweight applications can only be accessed by users within the BLM and DOI (i.e., logged into the DOI network).
o AIM Indicators Data Portal – Lotic and terrestrial calculated indicator data (no raw data)
o Lotic AIM Data Portal – Lotic calculated indicator data, raw data, and site photos
o Terrestrial AIM Data Portal – Terrestrial calculated indicator data, raw data, and site photos
o Riparian & Wetland AIM Data Portal – planned for Fiscal Year 2023
• \AIMDataTools\ (BLM Internal) – This folder on the BLM’s network drive contains multiple ways of accessing and interacting with AIM data, including layer files for ArcMap and ArcGIS Pro; current AIM indicator data in MS Excel format by state and by AIM resource; Python scripts; AIM database connection files; pre-configured map documents; and links to AIM web resources and web maps.
• ArcGIS Enterprise Geodatabases (DOI Internal) – Database connections enable users to access the AIM GIS databases in ArcGIS for diverse applications, including data analysis and map making.
For ArcGIS users working remotely, it may be faster to access these connections using ArcGIS on Citrix.
• BLM GeoSpatial Gateway (BLM Internal) – This is the internal BLM SharePoint site for sharing national datasets within the BLM. See the Terrestrial and Lotic pages for links to overall AIM program descriptions, data, metadata, and pre-configured map documents.
• BLM Geospatial Business Platform AIM Hub (External, Not on VPN) – The BLM Geospatial Business Platform hosts BLM data and metadata for access by the public. AIM calculated indicator data are available to view and download on this portal. This portal does not provide raw data or underlying databases. Also, this portal is usually updated several months after the DOI and BLM internal datasets are updated.
The BLM AIM databases are updated annually after field data are finalized and indicators are calculated and QC’ed. If there are data needs prior to this update, contact the State Lead or Data Analyst, who can work with the National AIM Team data managers.
In addition to AIM indicator data and photos, gather GIS information related to the analysis, including reporting unit polygons and information on the relevant benchmark groups (see section 4.3.2 for more details on reporting units and benchmark groups). For help interpreting AIM indicators and indicator metadata, consult the relevant resource’s metadata documents linked in each data portal and/or contact the relevant State Lead, state analyst, or NOC AIM analyst.
Step 3: Select Indicators for Evaluating Goals
Each management goal should be tied to one or more indicators in order to evaluate that goal using AIM data. First, review the standard indicators calculated by the National AIM Team in the relevant resource’s database. There are many existing resources to crosswalk these indicators to common management goals such as land health standards, including Appendix 1 of BLM Technical Note 453, the Habitat Assessment Framework Technical Reference and associated state RMP amendments, as well as peer-reviewed literature. The standard indicators calculated by the National AIM Team comprise only a subset of the potential indicators that can be calculated from AIM methods. If additional custom indicators are needed for a particular analysis but are not available from any of the AIM databases, contact the relevant NOC or state analyst to calculate them, or calculate them using the available tools (see section 6.2 Tools).
Step 4: Set Benchmark Values or Define Condition Categories
A fundamental piece of making defensible management decisions is using a clear and understandable rationale for how you use data to draw conclusions. For example, the BLM Rangeland Health Standards Handbook states that land health evaluations require a "consistent, defensible approach to drawing conclusions; an approach that is logical and provides a pathway between data, indicator, standard and conclusion." To do this you need some way of connecting your data to your conclusions. Benchmarks can act as a bridge to connect your data to your conclusions.
Benchmarks help turn policy statements such as “take appropriate action” or “make significant progress toward fulfillment of land health standards” into specific, measurable objectives. Benchmarks are simply indicator values, or ranges of values, that describe desired conditions and that, when crossed, may prompt some type of action or indicate management success. Benchmarks provide a quantitative way of classifying AIM indicator data into two or more categories. They are often applied to indicator data from points that have similar ecological potential or may respond to management actions similarly (benchmark groups) and thus reflect the condition of that area relative to its potential. Applying benchmarks that are specific to the ecological potential of each site allows for easy comparison of sites across large areas and summaries of condition at the scale of the reporting unit, no matter how heterogeneous that landscape may be.
Benchmarks are most often set with the intent to compare monitoring data to desired or reference conditions. However, in the absence of quantitative information regarding desired/reference conditions, descriptive benchmarks that do not relate to desired or reference conditions can also be used to categorize indicator data. This may facilitate exploratory data analysis or improve data visualizations. However, care should be taken to avoid using arbitrary benchmarks to categorize the indicator data, as this can affect interpretation of analysis results. This step should also involve setting the other components of your monitoring objectives, including the proportion of the resource required to be meeting the benchmark, the time frame to be considered, and the desired confidence level. See section 4.3.2.2 for example monitoring objectives. For more detail on methods for setting benchmarks, see Appendix B: Setting Benchmarks and Appendix 2 of BLM Technical Note 453.
Step 5: Identify Relevant Plots/Reaches and Assign Benchmark Groups
Before beginning analysis, ensure that you have selected the applicable plots/reaches to address your monitoring objective and then assign benchmark groups to each point. For your analysis it may not be appropriate to include all points collected in the area of interest. For instance, you may want to pare down your data by years sampled, point selection type (e.g., random vs. targeted), available indicator data, or some other defining characteristic. For example, if there are points across an entire field office but the desire is to evaluate a sage grouse habitat objective, only the points that are within sage grouse habitat should be considered for that particular objective. As another example, perhaps the intent is to conduct a weighted analysis to evaluate overall stream condition in the field office; for this analysis you would likely want to limit your data to random points and only data from the years of the design cycle being used for the analysis.
Benchmark groups (ideally GIS files, if applicable) are groups of monitoring points that have the same benchmark value for evaluating the success of a particular monitoring objective. These groups may be determined by a geospatial layer, plot/reach characteristics (e.g., stream width, ecological site), or some other defining feature. After selecting the points to use in the analysis, assign each point to its appropriate benchmark group(s).
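The logic of applying benchmark values by benchmark group can be sketched briefly. The group names, indicator, plot IDs, and threshold values below are hypothetical and purely illustrative; in practice the Terrestrial and Lotic Benchmark Tools (Step 6) perform this classification.

```python
# Minimal sketch of classifying plots against benchmarks by benchmark group.
# Groups, plots, the bare-soil indicator, and thresholds are hypothetical examples.
benchmarks = {                   # maximum bare soil (%) considered "meeting", by benchmark group
    "Loamy upland": 30,
    "Sandy upland": 40,
}
plots = [
    {"plot": "A-01", "group": "Loamy upland", "bare_soil_pct": 22},
    {"plot": "A-02", "group": "Loamy upland", "bare_soil_pct": 35},
    {"plot": "B-01", "group": "Sandy upland", "bare_soil_pct": 38},
]
for p in plots:
    meets = p["bare_soil_pct"] < benchmarks[p["group"]]
    print(p["plot"], "meets benchmark" if meets else "does not meet benchmark")
```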
6.3.2 Conducting an Analysis
The NOC and partners on the National AIM Team are available to complete any of the following steps or provide guidance and support as needed. To request an analysis from the NOC, submit the following information to the appropriate NOC Analyst. While this is not compulsory, it will help expedite your request.
Analyst Request Checklist:
• Spatial layers describing reporting units
• A list of points to include or exclude in the analysis and some justification for excluded points (if not all points within each reporting unit)
• Monitoring objectives or a list of classified points – this may be in the form of a completed benchmark tool
• Deadline for analysis results
• Points of contact
• A succinct statement of the analysis objective
Step 6: Apply Benchmark Values and Document Which Plots Achieve Benchmarks
The first phase in conducting most analyses is to complete the Benchmark Tool. Many analyses, including weighted, trend, and causal analyses, may use the results of the Benchmark Tool directly. Simple summary statistics or analyses without benchmarks do not require a filled-out benchmark tool.
The Terrestrial Benchmark Tool:
• Stores and uses monitoring objectives (including benchmarks, benchmark groups, and percent achieving desired conditions)
• Can be used for multiple reporting units and benchmark groups
The Lotic Benchmark Tool:
• Stores and uses monitoring objectives (including benchmarks, benchmark groups, and the number of sampled reaches in different conditions)
• Can be used for one reporting area
Note: One benchmark tool can include multiple reporting units if the benchmarks and monitoring objectives DO NOT change between reporting units. Otherwise, use one tool for each reporting unit.
Riparian & Wetland AIM benchmark and analysis tools are currently under development.
Step 7: Determine Appropriate Analysis
There are many ways to use AIM data, including but not limited to analyses at the individual point scale, landscape-scale analyses, and combinations of AIM data with other types of information. Since there are a variety of ways to analyze AIM data, it can be helpful to focus on several key factors to choose the most appropriate methods. These include:
• Type and amount of data available in your reporting unit
• Your management objectives or analysis objectives
• Required statistical rigor
• The time, resources, and staff available
For examples of how each of these factors may guide your choice of analysis approach, see Table XX.
Table XX. Common analyses using AIM data and considerations for each
Single Point Analyses
Some management decisions or questions require monitoring data from only a single plot or stream reach to address them. For example, in evaluating the effectiveness of restoration or land treatments at a specific location, only the plot- or reach-specific condition is needed to decide whether management was successful. These data are often used as part of a multiple-lines-of-evidence approach (see Step 9). Alternatively, only data from one plot or reach may be available in the area where a management question needs to be addressed. For example, a permit authorization renewal can use AIM data to supplement data collected at key areas. It might be that only one AIM point falls in the desired area. However, even one point can provide one line of evidence in a multiple-lines-of-evidence approach.
AIM data from a single point can be analyzed using a simple data summary table or figure to present the relevant indicator data. Monitoring objectives and benchmarks can also be applied to individual plots/reaches to get an understanding of condition. Even if only one point is available or needed to answer management questions, monitoring data (including AIM data) from a broader area can be used to give context to the indicator values and condition of an individual point. For example, comparing an individual plot/reach to a broader range of ecologically similar locations can help to interpret condition and how that relates to differing management history.
Un-Weighted vs. Weighted Analysis
Summaries of resource conditions across an area (multiple plots/reaches) are often needed to support management decisions. For example, a Land Health Evaluation will often use multiple monitoring locations across a grazing allotment. Likewise, Land Use Plan evaluations require knowledge of conditions at locations across a Land Use Planning area. Both unweighted and weighted analysis approaches can be used to summarize data within an area; however, each has specific benefits and limitations.
When using unweighted analyses, it is important to remember that the inference is limited to the area in which the data were collected (the plot or reach) and cannot be extrapolated to non-sampled areas. Unweighted analyses also do not account for any sample design information or potential overlap of multiple sample designs. This may result in spatial bias and an underestimation of variance, particularly in analyses that span several complex designs, so care should be taken when selecting points for unweighted analyses. Because of these limitations, it may be best to use unweighted analyses alongside other lines of evidence such as remote sensing estimates, other long-term monitoring datasets, and professional judgement. However, unweighted analyses can be conducted when there is not an adequate sample size to complete a weighted analysis, when there is not time or capacity to do a weighted analysis, or when the underlying sample design information is unavailable. Unweighted analyses may also be more appropriate for very broad scale or preliminary analyses when time to analyze large numbers of sample designs is limited.
Weighted analyses account for the number of acres or stream kilometers that each monitoring site represents, otherwise known as a weight. When selecting an analysis approach, it is important to take into account the underlying sample designs. Targeted points, or points that were implemented non-randomly, cannot be used to make a statistical inference and thus cannot be used in a weighted analysis. Weighted estimates are appropriate only for data coming from random sample designs. If the design was stratified, a weighted estimate will help to compensate for any spatial bias from point clustering.
Un-Weighted Analysis
Many management decisions can be made without explicitly accounting for the weight of each plot or stream reach. In general, these are unweighted analyses. Two commonly used unweighted analyses are calculating indicator summary statistics and point-counting analysis. When using unweighted analyses, it is still important to put the data in the context of benchmark expectations because this enables comparison between areas of differing ecological potential.
a) Indicator Summary Statistics
Calculating summary statistics using AIM indicator data, such as means, standard deviations, percentiles, and counts, can be an important first step in understanding your data in the initial stages of an analysis. This can also be an important part of using AIM data to set benchmarks or summarize baseline conditions (see Appendix B: Setting Benchmarks). If you are using indicator values for your analysis rather than conditions, consider whether benchmarks differ among points. If so, consider grouping summary statistics by benchmark group or converting indicator values to a ratio of observed value to benchmark value to normalize across those unique benchmark groups. If benchmarks do not differ, using raw indicator values is appropriate. An important part of summarizing indicator data is how the data are visualized. For examples of methods to visualize indicator summaries, see Step 8: Communicating Results.
b) Point Counting Analysis
To do an unweighted analysis, one can simply count the number of plots/reaches in a given condition (also known as a point-counting analysis). When conducting unweighted analyses, it is important to keep in mind how plots or reaches were selected and to watch for any spatial clustering of points, because that may influence results and interpretation of the data. This is because points may be subject to spatial bias when sample design weights are not taken into account. The Terrestrial and Lotic Excel Benchmark Tools can assist in completing unweighted analyses. Note that a plot/reach counting analysis does not allow for inference to areas beyond those that were sampled. To assess the percent of the landscape in a given condition, a weighted analysis is required. Since this approach is relatively quick and easy to conduct, it can be used to identify areas to look at more closely (e.g., for treatment/restoration prioritization planning).
Weighted Analysis
If the AIM data in the area of interest were collected using a spatially random sample, one specific analysis option to consider is a weighted analysis. A weighted analysis produces the percentage of the resource in a given condition on the landscape, with a known level of confidence. An example result of a weighted analysis is: “75% (+/- 8%) of brood-rearing sage grouse habitat is in suitable condition.”
Criteria for using a weighted analysis:
• Policy requires a weighted analysis (e.g., X% of resources are in a certain condition)
• Large area/stream extent
• More than 10 monitoring points are available from a probabilistic sample design, or capacity to collect this number is present
• A known level of confidence is desired
• More complex resource decisions
Many common land management decisions will not require a weighted analysis. For example, many grazing authorization renewals are for relatively small areas with insufficient capacity for collecting more than 10 monitoring plots/reaches. However, if a situation matches most of the criteria for a weighted analysis and there is interest in this analysis, then a weighted approach should be considered. Sage grouse habitat assessments and Land Use Plan effectiveness are common applications of weighted analyses. To understand the percentage of the resource in a given condition, a weighted analysis is required (a conceptual sketch contrasting the two approaches follows below).
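The difference between a point count and a weighted estimate can be sketched with a handful of hypothetical points. The benchmark results and acreage weights below are assumed values; confidence intervals and the full design-based estimators are handled by the National AIM Team’s analysis tools.

```python
# Conceptual sketch contrasting an unweighted point count with a weighted estimate.
# Points, benchmark results, and acreage weights are hypothetical examples.
points = [
    # (meets_benchmark, weight_in_acres)
    (True, 500), (True, 500), (False, 2_000), (True, 1_000), (False, 1_000),
]

# Unweighted: share of sampled plots meeting the benchmark.
unweighted = sum(m for m, _ in points) / len(points)                    # 0.60

# Weighted: share of the represented acres meeting the benchmark.
weighted = sum(w for m, w in points if m) / sum(w for _, w in points)   # 0.40

print(f"{unweighted:.0%} of plots vs. {weighted:.0%} of acres meeting the benchmark")
```

As the example shows, the two approaches can give different answers when points represent different amounts of the landscape, which is why only the weighted estimate supports inference about the percentage of the resource in a given condition.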
Point weights can be calculated from the total extent or amount of a monitored resource divided by the number of monitoring points. For example, if a 10,000-acre grazing allotment has 10 evaluated points in it, assuming a very simple, unstratified sample design, the weight of each point is 1,000 acres (10,000 divided by 10). Weights are used to generate proportional extent estimates of resource status or condition across the landscape. Specifically, the weight is used to adjust the relative influence each point has on the final estimates; points with larger weights have more influence, and points with smaller weights have less. The weight of each point depends on the design and how it was implemented as well as the reporting area of interest. Calculating weights can be complex because it depends on all the underlying sample designs and the stratification of those designs. Contact the National AIM Team to request a weighted analysis to determine proportional extent estimates of indicator conditions for your area of interest. Once a request has been received, the National AIM Team analyst will:
1. Ensure all the required documents are complete and all the required data have been received
2. Fill in the design database with updated point tracking information
3. QC the monitoring objectives and reporting area polygon(s)
4. Calculate weights
5. Calculate proportional estimates and confidence intervals
6. Generate figures and tables with the results
7. Send a summary of the results back to the individual requesting the analysis
Other Types of AIM Analysis
Analyzing Trend
If you have multiple years of data, you can consider comparing your data among years. It is worth noting that trend analysis and weighted analysis are not necessarily mutually exclusive. You could perform a weighted analysis separately on two time periods of data and then compare the results between periods using statistical tests such as an ANOVA; this would test whether there is a significant difference between time periods, which would be indicative of a trend or a response to some disturbance or management action. In general, comparing two time periods of 5 years each will help to control for effects that could be caused by high interannual variability in weather conditions between years. Alternatively, a regression analysis could be used to model the rate and direction of change over time and could also incorporate additional variables such as precipitation or other climate information. Trend analyses can also be completed at the individual point scale to assess change over time at a single point. Common uses of trend analyses include:
• Evaluating trend of an allotment for a grazing permit renewal
• Reporting long-term trend for affected environment sections of NEPA documents
• As part of a causal analysis (see below)
Causal Analyses
Causal analyses are effectively an extension of trend analyses which focus on attributing a change in a resource (effect) to a specific cause. Determining the cause of a change in a resource is inherently difficult in ecology due to the complex interactions between disturbance, management actions, historical uses, climate, and weather. Due to this complexity, causal analyses often require an experimental design; one common and powerful design is the Before-After Control-Impact (BACI) design.
While most AIM designs are not explicitly BACI designs, this framework can be applied post hoc if there are enough random pre- and post-disturbance points as well as undisturbed points which could be used as controls. One example of using this framework is to evaluate the effectiveness of a vegetation treatment or restoration action. Ideally, you would have several years (e.g., 5 years) of pre-treatment data (to control for unusually wet or dry years) as well as several years of post-treatment data (some treatments may take 5-10 years to recover). Ideally, sites with similar climate, weather, and ecological potential should be selected to control for non-treatment effects and then subset into control and impacted groups. In this example, the control sites would not be affected by the restoration action but would have similar climate and ecological potential to the sites that are, which allows for direct conclusions about restoration effectiveness. Several more examples of when causal analysis may be helpful:
• Determining whether permitted uses and other BLM management activities are responsible for degraded conditions
• Addressing degraded conditions due to upstream or adjacent activities
• Stressor prioritization, i.e., determining which stressors are the most extensive or influential across a reporting unit
Causal analyses may require larger amounts of data, resources, planning, and analysis time; reach out to the National AIM Team for support whenever needed.
Remote Sensing Analyses
Because AIM methods are consistent across all BLM lands and with other agencies, AIM data can often be combined with other agencies' data, other field monitoring information, and information from satellites and other aerial sensors to generate satellite-derived maps of many different indicators. Most remote sensing products use these various datasets within empirical models to predict vegetation, soil, and water indicators continuously across a landscape. It is important to note that while remote sensing models can be very helpful, they are best used alongside monitoring data in a multiple lines of evidence approach when used in decision making. The AIM program has shared our data with at least 100 different groups, many of which are using the data to create new map products. Some example products include:
• ClimateEngine
• Climate Restoration Tool
• Evaporative Demand Drought Index
• Global Forest Watch
• LandCART
• LandFire
• National Fire Situational Awareness
• National Wetlands Inventory
• RAP
• RCMAP
• Sage Grouse Initiative
• The National Map
• TNC Resilient Land Mapping Tool
• UN Biodiversity Lab
• USGS EarthExplorer
• Web Soil Survey
• Many more
Guiding principles for using satellite-derived maps:
1. Use maps within a decision-making framework
2. Use maps to better understand and embrace landscape variability
3. Keep error and uncertainty in perspective
4. Think critically about contradictions
See Allred et al. 2022 for more details. Since there are many different remote sensing models and tools, choosing the appropriate product is an important step in any remote sensing analysis. BLM Technical Note 456: Evaluation of Fractional Vegetation Cover Products provides an analysis of three commonly used remote sensing products and discusses appropriate uses of each for natural resource programs and decision making.
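To illustrate how field data and remotely sensed predictors are combined in the empirical models described above, the following Python sketch fits a random forest model that predicts an indicator from satellite-derived covariates and applies it to unsampled locations. The arrays, covariate columns, and model settings are hypothetical stand-ins; operational products such as RAP, RCMAP, or LandCART rely on much larger training datasets, carefully chosen predictors, and formal accuracy assessments.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: AIM field plots with satellite-derived covariates
# (e.g., spectral indices, climate, topography) and a measured indicator value
n_plots = 200
covariates = rng.normal(size=(n_plots, 4))      # stand-in predictor columns
foliar_cover = (30 + 10 * covariates[:, 0] - 5 * covariates[:, 1]
                + rng.normal(scale=5, size=n_plots)).clip(0, 100)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(covariates, foliar_cover)

# Predict the indicator for unsampled locations (hypothetical new pixels)
new_pixels = rng.normal(size=(5, 4))
print(model.predict(new_pixels).round(1))
```

As the guiding principles above suggest, predictions like these carry error and uncertainty and are best interpreted alongside field data rather than in place of them.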
Example analyses using remote sensing:
• Treatment/restoration planning and prioritization – Remote sensing products can be used to identify areas that warrant further investigation. For example:
o Are there certain regions within reporting units that are of more concern than others?
o Using a preponderance of evidence approach, are there certain areas with several indicators in degraded condition?
• Rapid assessment of trend
• Treatment effectiveness
For more background on different remote sensing products or for support in remote sensing analysis, contact the National AIM Team remote sensing specialist, AIM state leads or monitoring coordinators, and/or your state or National AIM Team analysts.
6.3.3 Interpreting Results
Step 8: Communicating Results
The final analysis step is to document, visualize, and interpret the analysis results in the context of your management goals. Interpreting results is the responsibility of the data user. Project Leads and field office specialists are the experts on each field office's management goals, field data, and management history. All of this local and historical knowledge should be used at this step in order to contextualize the analysis results.
Data Visualization
How data are visualized plays an important role in communicating results and helps to interpret complex analyses. There are a vast number of ways to display data; below are some examples that have been used to display AIM data in decision making. See also Section 6.2 Tools for tools that can help visualize different types of AIM data.
Boxplots and histograms
When summarizing continuous indicator data across a reporting unit, it is helpful to visualize the data's distribution, i.e., the range and frequency of different values. Boxplots and histograms are both good methods for this. Boxplots are particularly helpful since they display several key summary statistics of your data: the median, interquartile range, and overall range, as well as any potential outliers. When paired with point data, they can also display the sample size, which may influence how the data are interpreted; areas with very little data (e.g., < 30 points) are likely to give less precise estimates.
Figure 5. Example of a single indicator summary visualized with a boxplot and points, color coded by quantile.
Points and error bars
Points with associated error bars are a useful method for communicating categorical data such as proportional extent estimates from a weighted analysis. In this case, the points represent the estimate from a statistical analysis and the error bars represent confidence intervals describing the uncertainty around each estimate. The larger the confidence interval, the less precise the estimate and the greater the uncertainty.
Figure 6. Color coded points with error bars used to represent proportional extent estimates and confidence intervals.
Stacked bar plots
A stacked bar plot is another way to display your data and conveys some of the same information as the individual condition plot above. In this example, instead of the bars or points being displayed separately for good/fair/poor conditions, bars are stacked for each indicator being reported on. This is more appropriate for point-counting analyses than for weighted analyses because confidence intervals cannot be displayed on these plots. However, this is another concise way to visualize data across indicators, years, or field offices.
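A minimal matplotlib sketch of the three figure types discussed above (a boxplot with points, estimates with error bars, and a stacked bar plot), using hypothetical indicator and condition data. Real AIM figures would be built from TerrADat, the Lotic AIM Database, or analysis outputs rather than simulated values.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
fig, axes = plt.subplots(1, 3, figsize=(12, 4))

# Boxplot with points: distribution of an indicator within a reporting unit
bare_ground = rng.normal(25, 8, size=30)
axes[0].boxplot(bare_ground)
axes[0].scatter(np.full(30, 1), bare_ground, alpha=0.4)
axes[0].set_ylabel("Bare ground (%)")
axes[0].set_title("Boxplot with points")

# Points with error bars: proportional extent estimates with confidence intervals
categories = ["Suitable", "Marginal", "Unsuitable"]
estimates = [0.55, 0.30, 0.15]
ci = [0.08, 0.06, 0.05]
axes[1].errorbar(categories, estimates, yerr=ci, fmt="o")
axes[1].set_ylabel("Proportion of resource")
axes[1].set_title("Estimates with error bars")

# Stacked bar plot: plot counts by condition category, stacked by indicator
indicators = ["Bare ground", "Sagebrush cover"]
good, fair, poor = [10, 14], [6, 4], [4, 2]
axes[2].bar(indicators, good, label="Good")
axes[2].bar(indicators, fair, bottom=good, label="Fair")
axes[2].bar(indicators, poor, bottom=[g + f for g, f in zip(good, fair)], label="Poor")
axes[2].set_ylabel("Number of plots")
axes[2].set_title("Stacked bars")
axes[2].legend()

fig.tight_layout()
plt.show()
```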
Maps and photos
Displaying AIM data spatially can help to put indicator data in the context of the surrounding landscape and helps to illustrate any spatial patterns in conditions. Maps are also particularly important for weighted analyses because they can display the area within each reporting unit that is within the statistical inference of the analysis and, inversely, the areas to which that analysis cannot extrapolate.
Figure 7. Results of a weighted analysis displayed spatially. Points display monitoring locations color coded by benchmark categories. Areas colored in tan are within the inference area of the analysis, whereas areas in gray are parts of the reporting unit that the analysis cannot infer to.
Photos can be helpful during the initial stages of data exploration to corroborate indicator data and benchmark group assignments, as well as during the reporting phase to illustrate conditions across a reporting unit.
Figure 8. An example of using before and after photos in a stream restoration effort.
Another great way to use photos is as before/after pairs. This can be especially instructive in cases where management of an area has changed over time or there have been restoration or reclamation efforts.
Figure 9. An example of using photos alongside a map in a lotic report.
Dashboards and StoryMaps
Dashboards, StoryMaps, and their underlying web maps provide a more interactive display of data and can be much more versatile compared to static reports. They also provide an excellent forum for adding contextual data to help interpret AIM data, such as remote sensing information, climate data, additional monitoring data, disturbance layers, and other GIS data.
Figure 10. Example of a StoryMap displaying AIM data including analysis methodology.
Step 9: Decide whether management goals have been met
Using Multiple Lines of Evidence
Once attainment for each monitoring objective has been reported and visualized, the next step is to combine all sources of evidence for each management goal and determine whether or not each management goal is met. In cases where there are multiple monitoring objectives or multiple indicators per management goal, it may be helpful to use a method of structured decision making in order to combine objectives. This might include allocating relative importance to different indicators or data sources, using a preponderance of evidence approach, or working through each goal with an IDT. This step should include all sources of evidence, including remote sensing data and other monitoring and trend data, to evaluate each management goal. Once management goals have been evaluated, there may be a need for additional analyses, for example determining the cause of current conditions using a causal analysis or focusing a more detailed analysis on an area of concern.
7.0 Glossary
AIM: The Assessment, Inventory, and Monitoring program, which provides an approach for integrated, cross-program assessment, inventory, and monitoring of renewable resources at multiple scales of management as well as standardized, broadly applicable monitoring methods and tools consistent with the AIM Strategy.
Analysis: The process of turning monitoring data into information to answer a question.
Base Points: The original set of points in a panel of a design which are intended to be sampled in a given year.
Benchmark: An indicator value, or range of values, that establishes desired condition and is meaningful for management.
Benchmarks are used to compare observed indicator values to desired conditions. Benchmarks for a given indicator may vary by ecological potential; thus, different benchmark groups may be necessary within a project area so that points are understood as meeting or not meeting an objective relative to potential.
Benchmark Group: A geographic area or group of monitoring points that have the same benchmark for evaluating the success of a particular monitoring objective. For example, if there are points across the entire field office but evaluating sage grouse habitat is the objective, only the points that are within sage grouse habitat should be considered for that particular objective. Likewise, the ecoregion, ecological site, evaluation area, or stream type must be considered for determining whether an objective is met when benchmarks vary by ecoregion or Ecological Site Description.
Biophysical Setting (BpS): A remote sensing-derived layer that is conceptually very similar to NRCS Ecological Sites. BpS represents the vegetation that may have been dominant on the landscape prior to Euro-American settlement. BpS is based on both the current biophysical environment and an approximation of the historical disturbance regime. BpS descriptions cover the following characteristics of the BpS environment: vegetation, geography, biophysical characteristics, succession stages, and disturbance regimes (including major disturbance types).
Colorado State University's Colorado Natural Heritage Program (CNHP): An AIM science partner that provides science support for the Riparian & Wetland AIM program through research and development of the field methods protocol, training support, data stewardship, indicator development, and sample design and analysis support.
Condition: The status of a resource in comparison with a specific reference value or benchmark (adapted from Bureau of Land Management Rangeland Resource Assessment-2011). When describing condition, a condition category may be assigned (e.g., Suitable, Marginal, Unsuitable or Minimal, Moderate, or Major departure) relative to the benchmark or reference value.
Confidence Interval: A range of values that likely includes the true value of a population mean. Confidence intervals help describe uncertainty in indicator estimates. The confidence level indicates the probability that the confidence interval includes the true value and is chosen by the monitoring data user. For example, an 80% confidence level indicates that 80% of sampling events will result in estimates that fall within this range; 20% will not (Elzinga et al. 1998).
Contingent Methods: Standardized procedures for collecting data with the same cross-program utility and definition as core methods but measured only where applicable. Contingent methods are not informative everywhere and, thus, are only measured when there is reason to believe they will be important for management purposes.
Core Methods: Standardized procedures for collecting data that are applicable across many different ecosystems, management objectives, and agencies.
Covariate: A measured or derived parameter used to account for natural spatial or temporal variation in a core, contingent, or supplemental method or indicator; covariates help determine the potential of a given reach/plot to support a given condition or assist in interpreting the monitoring data.
Data Management: Organizing and storing data so that they can be accessed and used to create information for management decisions.
DIMA: The Database for Inventory, Monitoring, and Assessment (now obsolete), an MS Access application developed by the Jornada Experimental Range to collect field data, manipulate data in the office, and run preliminary reports. As of 2022, the Jornada no longer supports DIMA, and the AIM program has transitioned to ESRI products for data collection.
Ecological Site Descriptions (ESDs): Information and data pertaining to a particular ecological site are organized into a reference document known as an Ecological Site Description (ESD). ESDs function as a primary repository of ecological knowledge regarding an ecological site. ESDs are maintained on the Ecosystem Dynamics Interpretive Tool, which is the repository for information associated with ESDs and the collection of all site data (NRCS, 2017).
Ecological Sites: An ecological site is defined as a distinctive kind of land with specific soil and physical characteristics that differ from other kinds of land in its ability to produce a distinctive kind and amount of vegetation and its ability to respond similarly to management actions and natural disturbances.
ES&R: Burned Area Emergency Stabilization and Rehabilitation, planned actions to stabilize and prevent unacceptable degradation to natural and cultural resources, to minimize threats to life and property resulting from the effects of a fire, or to repair/replace/construct physical improvements necessary to prevent degradation of land or resources.
Final Designation: The final outcome of a potential monitoring point identified in a monitoring design. The final designation of the point has implications for how points are used in analyses and the subsequent inference to reporting units. Categories are as follows:
• Sampled points are locations on BLM lands where monitoring data were collected
• Inaccessible points are on BLM lands, but the data collectors could not physically access the site (e.g., needed to cross private land and access was denied, road was washed out)
• Non-target points are locations that upon further review were determined to not be part of the target population (e.g., points not on BLM-managed lands)
• Unknown points are those whose fate was not recorded or that were not assessed; as such, we do not know whether they are within the target population
• Not Needed points are locations that were selected for the design but do not need to be sampled because the necessary sample sizes were obtained or the definition of the target population changed (see Sample Size)
HAF: The Sage-Grouse Habitat Assessment Framework, a method to consistently evaluate suitability of sage-grouse habitat across the range and at multiple scales.
Indicator: A component of a system whose characteristics (e.g., presence or absence, quantity, distribution) are used as an index of an attribute (e.g., biotic integrity) that is too difficult, inconvenient, or expensive to measure.
Intensification: An effort that increases the density of monitoring locations within an area of special interest to increase the accuracy (mean estimate closer to the population mean) and precision (smaller confidence interval) of indicator estimates.
Typically performed in anticipation of special management decisions (e.g., permit renewal) that require greater accuracy and precision than provided by existing monitoring designs within the same area. Alternatively, performed because special areas have few or even no monitoring locations.
The Jornada: A USDA-ARS unit, the Jornada Experimental Range has partnered with BLM to develop and support AIM since 2006. The Jornada works with BLM Field, District, and State offices as well as the NOC and Washington Office to implement AIM and analyze AIM data.
LHS: Land Health Standards, statements of physical and biological condition or degree of function required for healthy sustainable rangelands.
LMF: Landscape Monitoring Framework, a national-scale dataset that, along with TerrADat, makes up the Terrestrial National AIM Database. A joint venture between BLM, the Natural Resources Conservation Service (NRCS), and Iowa State University. Data are collected following the National Resources Inventory (NRI) protocol (National Resources Inventory 2016).
Lotic AIM Database: The National Lotic AIM Database, formerly known as AquADat, contains all field data and calculated indicators from 2013 onward.
LUP: Land Use Plans, also known as Resource Management Plans (RMPs), that form the basis for every action and approved use on BLM-managed lands.
Management Objective (Management Goal): Broad goals or desired outcomes land managers are trying to achieve with land management. Management objectives and goals provide the context for why monitoring information is needed and how it will be used. Often, these are derived from planning documents and policy. Examples include maintaining forage production for livestock or high-quality habitat for big game animals.
Master Sample: A large number of pre-selected, random sample locations from which project-level designs can be selected. Across the western U.S. (12 states), terrestrial AIM master sample locations consist of 1 point per 35 hectares, and aquatic AIM master sample locations are 1 point per 0.5 km of stream length. These points can be used for comparable, complementary monitoring among separate monitoring organizations and across geographic scales. The Master Sample retains the principles of randomization and spatial balance. Further reading: Larsen, D.P., A.R. Olsen, and D.L. Stevens. 2008. Using a master sample to integrate stream monitoring programs. JABES 13: 243-254.
Monitoring Design Worksheet: A step-by-step template to document and plan an AIM monitoring effort. This worksheet serves many purposes, including documenting decisions and reasons for completing monitoring, providing the necessary information for drawing sample points, and completing analyses once data are collected.
Monitoring Objective: Quantitative statements that provide a means of evaluating whether management objectives or goals were achieved. Monitoring objectives should be specific, quantifiable, and attainable based on available resources and the sensitivity of the methods. At a minimum, monitoring objectives should include: 1) an indicator; 2) a benchmark for the indicator; 3) a time frame for evaluating the indicator; and 4) the reporting unit(s) over which the monitoring results will be reported.
If making inference to a broader amount of resource (i.e., beyond the individual site scale) is pertinent to an objective, be sure to include the proportion of the resource that is desired to achieve certain conditions (i.e., benchmarks) and a confidence level in the objective.
NAMC: The National Aquatic Monitoring Center (NAMC) is a joint venture between the BLM and Utah State University. The mission of NAMC is to foster and support scientifically sound aquatic monitoring programs on public lands. NAMC plays a large role in leading AIM monitoring efforts for rivers and streams.
The National AIM Team: The AIM team supporting national implementation of AIM data collection, AIM database stewardship, and data use. The National AIM Team is composed of BLM staff at the NOC and Headquarters, and science partner staff at the USDA's Jornada Experimental Range, CSU's CNHP, and USU's NAMC. BLM staff on the National AIM Team are largely housed in the Division of Resource Services (DRS) at the NOC. The DRS provides a technical interface between national policy and field operations through scientific and specialized products, resource data stewardship, and technical program support.
National Hydrography Dataset (NHD): The NHD is a national geospatial dataset that represents surface water on the landscape. The NHDPlus medium resolution (1:100,000 scale) dataset is broken into stream segments, each of which is associated with several attributes including the Strahler Stream Order and whether the segment has been designated as perennial, intermittent, an artificial path, etc. The NHD high resolution dataset is at the 1:24,000 scale.
Natural Resources Conservation Service: The USDA's primary private lands conservation agency, which generates, manages, and shares data, technology, and standards that help partners and policymakers make decisions informed by objective, reliable science.
NOC: National Operations Center, a BLM center that provides operational and technical program support to BLM State, District, and Field offices as well as collaborators.
Objective: A formal statement detailing a desired outcome of a project.
Oversample Points: Extra sample points which are selected at the time of the base sample draw. These points are used to supplement the base points when a base point is rejected or not sampled (see Final Designation), helping ensure that needed sample sizes are met.
Panel: A set of sample points that have the same revisit pattern across years. For example, an AIM design might be divided into 5 panels, each one visited in a different year. All points within a single panel visited in 2017 would then be revisited in 2022, 2027, and so on. The points visited from 2017 through 2021 together make up the entire sample design.
Percent (Proportion) Achieving Desired Conditions: The desired percentage of a resource with one or more indicator values that meet benchmark value(s). For instance, a desired percentage may be 80% of the landscape with <20% bare ground, or 80% of sage-grouse summer habitat scored as suitable (based on multiple indicators).
Percentages are derived from the weights (see Weight) of monitoring points or plots, where a point or plot weight indicates the extent of the resource represented by a point or plot.
Percentiles of Regional Reference: An approach to setting benchmarks that uses reference sites or points grouped by a landscape classification schema (e.g., ecoregions) to create a distribution of reference site indicator values. Benchmarks can then be set by assuming that sites in reference condition should fall within certain percentiles of the reference site distribution for a similar physiographic region. For example, the 90th and 70th percentiles of reference site floodplain connectivity values for the Colorado Plateau can be used to separate "major departure," "moderate departure," and "minimal departure" from reference conditions, respectively. For lotic AIM, this approach can be used for indicators that lack models to compute predicted natural conditions. For terrestrial AIM, this approach is dependent on identifying and establishing a group of regional reference points.
Physiographic Properties: Physical characteristics of a landscape that can be used to understand the potential of that landscape. These properties can be used as supplemental information, or covariates, for interpreting indicators. Slope, aspect, landform, and soil type are all physiographic properties.
Population: The entire "universe" to which the results of sampling apply. The population is defined by many factors: the area of interest, objectives, and constraints.
Project Area: Describes the broadest outline of a project. Usually, the boundary of a field office, district office, or other administrative boundary. A project area contains the target population (e.g., BLM land within a field office boundary). See also Study Area.
Predicted Natural Conditions: An approach to setting benchmarks where the conditions expected to occur at a plot or reach in the absence of anthropogenic impairment are derived from empirical models. Such models use geospatial predictors (e.g., soil, climate, and topographic attributes) to account for natural environmental gradients. Observed field values are compared to potential natural indicator values, and any deviation is assumed to result from anthropogenic impacts. This approach is advantageous because it provides spatially explicit predictions of expected conditions with known levels of accuracy and precision. Due to data limitations and the current state of the scientific literature, this approach is only available for a few lotic AIM indicators.
Quality Assurance: A proactive process employed to maintain data integrity; a continuous effort to prevent (e.g., training, calibration, proper technique), detect (e.g., on-plot data review, client-side data validation), and correct (e.g., readjustments in response to data review) measurement errors.
Quality Control: A reactive process to detect measurement errors after the data collection process is complete.
Reporting: Communicating the results of monitoring data analysis in a manner that can be used to address management goals or as part of the adaptive management process.
Reporting Unit: A subset of the study area where information, such as indicator means and confidence intervals, is needed. A study area can have various reporting units. Knowing the units ahead of time helps ensure adequate sampling. Reporting units may differ from strata.
Watersheds, allotments, and Greater Sage-grouse habitat units are all examples of reporting units.
Sample Design: Provides information on the target and final sample sizes, strata definitions, and the sample selection methodology. This term has been used interchangeably with "sample plan," "survey design," "sampling plan," and "sampling design." In AIM, the details of the sample design are documented in the Monitoring Design Worksheet.
Sample Frame: A representation of the target population. The sample frame is often a geospatial feature (e.g., SMA layer, NHD, wetland mapping), but it can also be a list of the elements of interest (e.g., BLM acres, wetlands, or stream reaches).
Sample Point (Reach or Plot): A location where monitoring information has been collected or data collection is planned. For terrestrial and Riparian & Wetland AIM, this is a plot. For lotic AIM, this is a stream reach. In some documents, the phrase sample point is used to refer to both.
Sample Size: The number of points or plots in the target population that need to be sampled within a stratum to ensure a desired level of precision and accuracy for data analysis. The sample size across the study area is a function of several factors: 1) existing or legacy monitoring information; 2) statistical considerations (e.g., what analyses do I need, what is my desired confidence level and confidence interval); and 3) funding and personnel limitations (e.g., how many points per year can I accomplish). The sample size may influence the types of analyses that can be performed and the statistical uncertainty of the results.
Sampling: Using selected members to estimate attributes of a larger population.
Sampled Population: The portion of the target population that was actually sampled.
Spatially Balanced Sampling: Samples are evenly spaced across the study area and ordered to maximize spatial dispersion of any sequence of units.
Status: A measured indicator value or range of values.
Strahler Stream Order: A hierarchical numeric system used to classify stream size. Stream size as determined by this method is used in most if not all lotic designs as a stratum. First-order streams are small headwater streams. When two first-order streams come together, a second-order stream is formed; when two second-order streams come together, a third-order stream is formed, and so on. Two streams of different orders (e.g., first and second) coming together do not create a higher-order stream (e.g., third); the stream below the confluence remains the same order as the larger of the two streams (e.g., second order). Common groupings of stream orders for lotic AIM strata are SS – small streams (order 1-2), LS – large streams (order 3-4), and RV – rivers (order 5+).
Strata: Subdivisions of the study area used to divide up sampling efforts. Strata can be used to ensure adequate sample sizes for parts of the study area which are of particular management concern or may be used to increase precision when extrapolating data over large areas.
Strata can be relatively uniform parts of the landscape (e.g., flood basin or hill summit) or general areas that need adequate sample sizes (e.g., sage grouse habitat, streams with T&E fish species).
Stratification: Stratification refers to dividing a population or study area into sub-groups or subunits, called strata, for the purposes of sampling or data analysis. Reasons to stratify: 1) variability in indicators differs across types of land; 2) to ensure different types of land or uncommon portions of a study area are adequately represented in the sample population; and 3) to deal with differences in land potential. Examples of strata include biophysical settings (see BpS), stream order (see Strahler Stream Order), management unit boundaries, and ecological sites.
Supplemental Design: Additional points drawn for a pre-existing design because all pre-existing points have been used and needed sample sizes or monitoring objectives have not yet been met.
Stressor: Environmental or ecological stressors are pressures or dynamics, caused by human and associated activities, that impact ecosystem components or processes.
Supplemental Method: A measurable ecosystem component that is specific to a given ecosystem, land use, or management objective. There are no standardized methods, training, or data management processes for supplemental methods but, where desired, they can be sampled along with AIM core and contingent methods.
Study Area: Defines the extent of the population and is the maximum area to draw conclusions about. See Project Area.
Target Population: Refers to the resource to be described. In statistical surveys, the target population refers to the group of individuals that one seeks to make inference to. Sample points (see Sample Point (Reach or Plot)) are selected from within the population. The definition of the target population should contain specific information including the resource of interest, its spatial extent, its ownership status, and its size. The definition should be specific enough that an individual could determine whether a sample point is part of the target population. In some cases, membership in the target population might be determined after data have been collected at the sample point (e.g., sage-grouse seasonal habitat). Examples of target populations include: all BLM lands within a reporting unit, all perennial, wadeable streams on BLM land, and sage grouse habitat on BLM lands.
TerrADat: Terrestrial AIM Data (TerrADat) is a national terrestrial monitoring database. As of 2022, TerrADat is a multi-scaled dataset built around the state level that, along with LMF, makes up the Terrestrial AIM Database.
Trend: The direction of change in ecological status or resource value rating observed over time.
Weight: A weight is the area (in acres or hectares) or length (in stream kilometers) represented by an individual sample point. In general, point weights are equal to the total extent or amount of a monitored resource divided by the number of monitoring points. Weights are used to generate statistical estimates of resource status or condition across the landscape. Specifically, the weight is used to adjust the relative influence each point has on the final estimates; points with larger weights have more influence, and points with smaller weights have less.
The weight of each point depends on the design and how it was implemented (see Final Designation) as well as the reporting area of interest.
8.0 Literature Cited
Bureau of Land Management. 2001. Rangeland Health Standards, BLM Handbook H-4180-1. Department of the Interior, Bureau of Land Management. Bureau of Land Management. 2005. Land Use Planning Handbook, BLM Handbook H-1601-1. Department of the Interior, Bureau of Land Management. Bureau of Land Management. 2008b. Final Report for the Analysis of Inventory and Monitoring Activities in BLM. Department of the Interior, Bureau of Land Management. Bureau of Land Management and Office of the Solicitor (eds.). 2001. The Federal Land Policy and Management Act, as amended. Department of the Interior, Bureau of Land Management, Office of Public Affairs, Washington, DC. www.blm.gov/flpma/FLPMA.pdf. Bureau of Land Management. 2015. AIM National Aquatic Monitoring Framework: Introducing the Framework and Indicators for Lotic Systems. Technical Reference 1735-1. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, CO. Bureau of Land Management. 2021. AIM National Aquatic Monitoring Framework: Field Protocol for Wadeable Lotic Systems. Tech Ref 1735-2, Version 2. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, CO. Bureau of Land Management. 2022. DRAFT: AIM National Aquatic Monitoring Framework: Field Protocol for Lentic Riparian and Wetland Systems. Tech Ref 1735-X. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, CO. Bureau of Land Management. 2022. BLM's Lotic Assessment, Inventory, and Monitoring (AIM) 2022 Field Season: Evaluation and Design Management Protocol. Version 5.0. Bureau of Land Management, National Operations Center, Denver, CO. https://www.blm.gov/sites/default/files/docs/2022-01/LoticEvalAndDesignManagementProtocol_2022v5.0.pdf. Bureau of Land Management. 2022. BLM's Lotic Assessment, Inventory, and Monitoring (AIM) 2022 Field Season: Technology and Applications Manual. Version 2.1. Bureau of Land Management, National Operations Center, Denver, CO. https://www.blm.gov/sites/default/files/docs/2022-03/Lotic_TechnologyAndApplicationsManual_2022.pdf. Bureau of Land Management. 2022. BLM's Lotic Assessment, Inventory, and Monitoring (AIM) 2022 Field Season: Data Management and Quality Assurance and Quality Control Protocol. Version 5.0. Bureau of Land Management, National Operations Center, Denver, CO. https://www.blm.gov/sites/default/files/docs/2022-03/Lotic_DataManagementProtocol_2022.pdf. Bureau of Land Management. 2022. BLM's Riparian and Wetland Assessment, Inventory, and Monitoring (AIM) 2022 Field Season: Data Management, Quality Assurance, and Quality Control Protocol. Version 2.0. Bureau of Land Management, National Operations Center, Denver, CO. https://www.blm.gov/sites/default/files/docs/2022-08/R%26W_AIM_DataManagementProtocol_2022.pdf. Bureau of Land Management. 2022. BLM's Riparian and Wetland Assessment, Inventory, and Monitoring (AIM) 2022 Field Season: Design Management and Plot Evaluation Protocol. Version 1.0. Bureau of Land Management, National Operations Center, Denver, CO. https://www.blm.gov/sites/blm.gov/files/docs/2022-05/RiparianWetlandAIM_Design_EvaluationProtocol.pdf. Bureau of Land Management. 2022. BLM's Terrestrial Assessment, Inventory, and Monitoring (AIM): Data Management Protocol. Version 5.0.
Bureau of Land Management, National Operations Center, Denver, CO. https://www.blm.gov/sites/blm.gov/files/docs/2022-04/Data%20Management%20Protocol%20V5_0.pdf. Code of Federal Regulations. 2011. Grazing Administration—Exclusive of Alaska. Title 43, Part 4100. e-CFR. http://ecfr.gpoaccess.gov/cgi/t/text/text-idx?c=ecfr&tpl=%2Findex. tpl. Caudle, D., J. DiBenedetto, M. Karl, H. Sanchez, C. Talbot. 2013. Interagency ecological site handbook for rangelands. US Department of the Interior, Bureau of Land Management, Washington, DC, USA. Dickard, M., M. Gonzalez, W. Elmore, S. Leonard, D. Smith, S. Smith, J. Staats, P. Summers, D. Weixelman, S. Wyman. 2015. Riparian area management: Proper functioning condition assessment for lotic areas. Technical Reference 1737-15. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, CO. Elzinga, C.L., D.W. Salzer, J.W. Willoughby. 1998. Measuring and Monitoring Plant Populations. Bureau of Land Management Technical Reference 1730-1. Denver, CO: BLM National Business Center. Gonzalez, M.A. and S.J. Smith. 2020. Riparian area management: Proper functioning condition assessment for lentic areas. 3rd ed. Technical Reference 1737-16. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, Colorado. Hawkins, C.P., J.R. Olson, J.R. Hill. 2010. The Reference Condition: Predicting Benchmarks for Ecological and Water Quality Assessments. Journal of the North American Benthological Society 29(1): 312-343. Herrick, J.E.; J.W. Van Zee; S. McCord; E. Courtright; J. Karl; L.M Burkett. 2017. Monitoring Manual for Grassland, Shrubland, and Savanna Ecosystems, Volume I: Core Methods. Herrick, J.E; J.W. Van Zee; K.M. Havstad; L.M. Burkett; W.G. Whitford. 2009. Monitoring Manual for Grassland, Shrubland, and Savanna Ecosystems, Volume II: Design, Supplementary Methods and Interpretation. Kachergis, E., N. Lepak, M. Karl, S. Miller, and Z. Davidson. 2020. Guide to Using AIM and LMF Data in Land Health Evaluations and Authorizations of Permitted Uses. Tech Note 453. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, CO. Kachergis, E., S.W. Miller, S.E. McCord, M. Dickard, S. Savage, L.V. Reynolds, N. Lepak, C. Dietrich, A. Green, A. Nafus, K. Prentice, Z. Davidson. 2022. Adaptive monitoring for multiscale land management: Lessons learned from the Assessment, Inventory, and Monitoring (AIM) principles. Rangelands, 44(1): 50-63. Karl, J.W., J.E. Herrick. 2010. Monitoring and Assessment Based on Ecological Sites. Rangelands, 32(6): 60-64. Larsen, D.P., A.R. Olsen, and D.L. Stevens. 2008. Using a master sample to integrate stream monitoring programs. JABES 13: 243-254. MacKinnon, W.C., J.W. Karl, G.R. Toevs, J.J. Taylor, M. Karl, C.S. Spurrier, and J.E. Herrick. 2011. BLM Core Terrestrial Indicators and Methods. Tech Note 440. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, CO. Pellant, M., P.L. Shaver, D.A. Pyke, J.E. Herrick, N. Lepak, G. Riegel, E. Kachergis, B.A. Newingham, D. Toledo, and F.E. Busby. 2020. Interpreting Indicators of Rangeland Health, Version 5. Tech Ref 1734-6. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, CO. Stiver, S.J., E.T. Rinkes, D.E. Naugle, P.D. Makela, D.A. Nance, and J.W. Karl, eds. 2015. Sage-Grouse Habitat Assessment Framework: A Multiscale Assessment Tool. Technical Reference 6710-1. 
Bureau of Land Management and Western Association of Fish and Wildlife Agencies, Denver, Colorado. Stoddard, J.L., D.P. Larsen, C.P. Hawkins, R.K. Johnson, R.H. Norris. 2006. Setting expectations for the ecological condition of streams: the concept of reference condition. Ecological Applications 16(4):1267-1276. Toevs, G.R., J.J. Taylor, C.S. Spurrier, C. MacKinnon, M.R. Bobo. 2011. Assessment, Inventory, and Monitoring Strategy: For Integrated Renewable Resources Management. US Department of Interior, Bureau of Land Management, National Operations Center. https://www.blm.gov/sites/blm.gov/files/uploads/IB2012-080_att1.pdf https://eplanning.blm.gov/public_projects/lup/31652/63338/68680/IDMT_ARMPA_web.pdf (page 13 of this document) 9.0 Tables Table 1: AIM Related Policy Summary – How AIM Supports the BLM Mission AIM data represent one common dataset for upland, stream and river, and riparian and wetland resources that can be used for multiple purposes. The following table shows the current status of AIM integration and future potential for AIM integration with BLM programs. Some programs have developed policy, dedicated funds and/or provided technical guidance for AIM efforts, which has significantly increased AIM data collection and application. Other programs have done less yet have great potential.   Please contact a Core Team member for further information: • Emily Kachergis, Headquarters National AIM Team Lead (ekachergis@blm.gov) • Aleta Nafus, Terrestrial AIM Team Lead (anafus@blm.gov) • Nicole Cappuccio, Lotic AIM Team Lead (ncappuccio@blm.gov) • Lindsay Reynolds, Riparian & Wetland AIM Team Lead (lreynolds@blm.gov)   Program or Area   Status of AIM Integration  Potential for AIM Integration   National Reporting  AIM data, along with related remote sensing maps, show the national condition of rangelands in several forthcoming national reports:  DOI Strategic Plan, Public Lands Statistics, the Renewable Resources and Planning Act of 1974, and several national EIS’s.   Scientific analyses of AIM data could address specific questions that BLM leadership has to inform large scale policy and/or funding decisions.   Land Use Plan (LUP) Planning and Plan  Effectiveness   The Draft LUP Handbook (see Planning Sharepoint) encourages use of standardized monitoring for planning and plan effectiveness, including AIM.  IM 2016-139 required use of AIM for LUP effectiveness monitoring, consistent with FLPMA and the LUP handbook. Nearly two-thirds of field offices are collecting terrestrial and/or lotic AIM data for LUP effectiveness, especially in states with sage-grouse subject to the sage-grouse plan amendments. National and state-based reports are informing LUP effectiveness and further planning efforts.  Additional states are completing LUP effectiveness reports using AIM data.  All field offices could collect AIM data, including the new Riparian & Wetland AIM. All field offices could also use results to report out on LUP effectiveness and to amend or revise plans, where needed.   Wild Horse and Burro   Some field offices are collecting and using AIM data for understanding land health in herd management areas to decide when a gather is needed. The 2014 National Academy of Sciences program review praised this use of AIM.  The Wild Horse and Burro Handbook requires use of land health information for management decisions, but application along with AIM appears inconsistent.    
Policy and technical guidance could be developed to promote collection and use of AIM data for land health evaluations in Herd Management Areas, setting Appropriate Management Levels for populations, and justifying management actions. Technical guidance could build on Technical Note 453, Guide to Using AIM and LMF Data for Land Health Evaluations and Authorizations of Permitted Uses.  Funding is also likely needed to ensure that sufficient monitoring data are collected within HMA’s.  Rangeland   Management   Use of AIM data to understand conditions and streamline grazing permit renewals is becoming widespread, and are encouraged in program budget language. Technical Note 453, Guide to Using AIM and LMF Data for Land Health Evaluations and Authorizations of Permitted Uses, was co-developed with program technical experts.   Technical guidance or policy could be developed to standardize range monitoring implementation with AIM, as appropriate. AIM data could inform all grazing permit renewals (where data are available).  Use of remotely sensed indicator maps informed by AIM data could increase.   Post-fire treatment effectiveness (ESR)   AIM is required for fires >10,000 acres in program budget language.  Use of AIM for measuring and reporting ESR treatment effectiveness is widespread.   Additional technical guidance could be developed similar to the Fuels IM and Guidebook.  AIM could be required for smaller fires and/or ESR projects above a certain spending limit.    Fuels and Wildfire Management   FA-IM-2019-012 and associated technical guidance standardize fuels treatment effectiveness with AIM. Field offices in every state are collecting and using AIM data to evaluate fuels treatment effectiveness.     With improved implementation of the policy, AIM fuels treatment effectiveness efforts could expand greatly (1,000 or more monitoring plots per year across BLM).  Use of remotely sensed indicator maps informed by AIM data can help identify patterns of fuel loading and fire risk.   Aquatic Habitat Management   AIM is encouraged (where practical) as the primary means to assess water quality, habitat viability for species of management concern, invasive species, and riparian conditions (budget language).   AIM could be integrated with other monitoring tools to create a single, streamlined tool to be used for all aquatic assessment and monitoring.   Wildlife--Greater Sage-Grouse Habitat   HQ IM 2022-056 requires use of AIM data to inform greater sage-grouse habitat assessment. AIM data collection and application to assessments are widespread.  Use of available remote sensing products is also widespread.   A small number of additional field offices could begin AIM data collection and habitat assessments.   Wildlife--Game and Other Species Habitat   AIM integration is encouraged in program budget language. Some field offices are collecting and using AIM data to assess habitat conditions for other wildlife species, including fish, mule deer, amphibians and desert tortoise.   AIM could inform habitat management of most wildlife species, including at larger spatial scales such as migration corridors for big game. Use of remotely sensed indicator maps informed by AIM data could also inform wildlife and habitat management.   Energy and Minerals Development   A few field offices are collecting and using AIM data to evaluate reclamation effectiveness. A 2017 GAO Audit recommended expanded AIM integration. 
The USGS is leading an effort to provide technical guidelines for reclamation monitoring including AIM integration (the "Greenbook"). The Solar PEIS commits to using AIM as the monitoring approach, and several field offices are doing so. Reclamation effectiveness using AIM could expand to more offices and mineral types. Data could also be used to identify potential areas for well pads or high resource value areas to avoid, mitigation areas, rights-of-way corridors, and appropriate reclamation objectives. An ongoing partnership with USGS for Surface Disturbance and Reclamation Tracking (SDARTT) provides an opportunity to standardize reclamation workflows which could include AIM data. Mitigation: The mitigation handbook was reinstated in 2021. It recommends use of standardized methods such as AIM for measuring mitigation effectiveness. AIM could be integrated with the implementation of this policy, including inclusion in monitoring plans in BLM-managed mitigation and encouraging third party mitigation partners to adopt AIM. Recreation: The recreation program is interested in standardizing program data collection, using AIM principles as a framework to accomplish that. National Scenic and Historic Trails: The National Scenic and Historic Trails program completed Technical References 6180-1 and 2 for monitoring, which recommend AIM methods for natural resource concerns in these areas. Implementation of the new policy and guidance could be further integrated with and supported by the broader AIM program.
10.0 Figures
11.0 Photos
12.0 Appendices
Appendix A: Roles and Responsibilities
This appendix defines the roles and responsibilities of AIM practitioners, which may include the following.
National AIM Team
National AIM Program Lead (Headquarters)
Responsible for:
• Developing and maintaining up-to-date policies, objectives, priorities, and general procedures for assessment, inventory and monitoring of natural resources at a national level;
• Developing agency budget guidance pertaining to assessment, inventory and monitoring of natural resources and recommending funding allocations to state offices and centers;
• Monitoring AIM implementation expenditures and performance;
• Coordinating with state office program leads and other national program leads to ensure consistent implementation of AIM related policies;
• Providing technical expertise and appropriate resources across the BLM to ensure proper consideration and implementation of AIM related policies;
• Coordinating with other Federal agencies, Tribal and State agencies, and national and international organizations on assessment, inventory and monitoring activities;
• Working with the BLM's National Operations Center and National Training Center to develop science initiatives, tools, and training materials relevant to the AIM program and policies;
• Facilitating reviews of new and proposed legislation, regulations, and policies as needed to determine how they affect the policies and objectives of BLM relevant to assessment, inventory and monitoring;
• Reviewing Resource Management Plans and associated documents;
• Communicating with Division Chiefs about resource conditions and trends nationally to ensure they have current information.
National Operations Center (NOC) and Partners (USDA-ARS Jornada, BLM/USU National Aquatic Monitoring Center, CSU Colorado Natural Heritage Program)
Responsible for:
• Providing technical support and expertise across the BLM and to partners to ensure consistent implementation of assessment, inventory and monitoring activities;
• Developing and implementing training in cooperation with the National Training Center to support consistent data collection and to meet BLM workforce needs;
• Developing, managing, and maintaining internal and external systems to electronically capture, manage, access, analyze, and report on assessment, inventory and monitoring data;
• Preparing, reviewing, and evaluating BLM technical references, user guides, technical notes, and other documents supporting the policies and objectives of BLM assessment, inventory and monitoring activities in coordination with national program leads;
• Coordinating with Headquarters, regional, state, district, and/or field office personnel to ensure that AIM implementation is successful (e.g., answering questions regarding the AIM strategy, monitoring plans, access to resources for projects, etc.);
• Selecting monitoring points based on field/district/regional/state needs (as captured in their Monitoring Design Worksheets);
• Supporting data analysis for management decision-making through applying statistical expertise;
• Providing standard reporting templates and approaches for resource information needs across BLM.
Regional Monitoring Coordinator or Other Regional Coordinator (as applicable)
• Logistical support for sage-grouse monitoring efforts across the region
• Point of coordination for study design and analysis to ensure needs are met across state boundaries
• Coordinate with the Geospatial Ecologist, Mitigation Coordinator, Sage Grouse Coordinator, and other leads in the region as well as the AIM team at the NOC
• Provide data analysis support for states
• Support data analysis for management decision-making through applying statistical expertise
State Offices
State Monitoring Coordinator and/or State Lead
Responsible for:
• Providing programmatic support to BLM personnel in the development and implementation of the AIM strategy within the state, for uplands, streams and rivers, and riparian and wetland areas.
• Overseeing implementation and reporting of AIM monitoring activities and policies within the state. Developing state-level policies as needed to ensure program objectives are met.
• Collaborating with natural resource and planning State Office program leads to ensure objectives of the AIM monitoring program are integrated into their respective programs.
• Partnering with National Operations Center and Headquarters AIM staff in execution of the AIM strategy.
• Maintaining cooperative working relationships with State and Federal agencies, universities, and local groups relative to the assessment, inventory, and monitoring of natural resources.
• Recommending funding allocations that will best achieve the objectives of this policy and tracking expenditures to determine if the allocated funds have been appropriately expended.
• Coordinating required training according to the protocol to keep field and district offices current on policies and direction changes.
• Facilitating analysis and use of AIM data in decision-making and land use planning.
• Communicating with State Directors, Deputy State Directors, and Branch Chiefs about resource conditions and trends to ensure they have current information within the state;
• Managing agreements/contracts to hire field crews for AIM monitoring data collection, and assisting with crew hiring when necessary;
• Facilitating communication among BLM offices and BLM staff at different levels (e.g., adjacent Field Offices; Project Leads and Field Office Managers);
• Organizing and facilitating early-season and end-of-season AIM crew check-ins;
• Performing final QA and QC of data, approving data, and submitting them to the National AIM Team;
• Coordinating report preparation to support land health assessments, land health evaluations, authorizations, and decision making within the state;
• Coordinating with the Geospatial Ecologist, Mitigation Coordinator, Sage Grouse Coordinator, and other state leads;
• Communicating with stakeholders and partners within the state and providing technical information when needed;
• Providing data analysis support for District and Field Offices and/or, where possible, completing data analysis for them.
District/Field Offices
Project Lead or District/Field Office Monitoring Coordinator, in conjunction with an IDT
• Plan and coordinate monitoring efforts with other office resource leads based on multiple resource needs (e.g., Land Use Plan effectiveness, treatment effectiveness, wildlife habitat)
• Document monitoring objectives and plans in a Monitoring Design Worksheet in coordination with IDTs, the State Program Lead, and the National AIM Team (when designs will be provided by the National AIM Team)
• Establish field crews (if needed, depending on contract/agreement or other arrangement)
• Assemble field equipment (if needed, depending on contract/agreement or other arrangement)
• Conduct field visits, post-training local orientation, and calibration (lotic)
• Provide local support and supervision to monitoring field crews throughout the field season
• Oversee field crew QA and QC of data and submit final dataset to State Office (terrestrial only)
• Interpret data and apply monitoring information to management decisions (e.g., evaluate land health standards, sage grouse habitat condition, Land Use Plan effectiveness)
Crew manager or crew lead potential responsibilities:
• Office point evaluation
• Trip planning
• Coordination with field office staff
• Final data QC
• Participate in AIM core methods training
• Calibrate on data collection methods
• Organize field trips / hitch plans
• Collect field data following standardized methods and follow quality assurance procedures
• Communicate with supervisory staff regularly to facilitate safety oversight
• Communicate with Project Lead regarding data collection and data submission processes
• Properly process, store, and document samples collected in the field
• Perform data quality control checks
• Maintain field equipment and vehicle
Data Collectors
Typically, field crews are hired only for the duration of the field season, plus sometimes a few weeks for pre-season preparation and post-season clean-up. If crew leads are asked to stay on for a longer duration, the list below should be amended to reflect the additional roles and responsibilities assumed by the crew lead.
• Participate in AIM core methods training
• Calibrate on data collection methods
• Collect field data following standardized methods and follow quality assurance procedures
• Communicate with Project Lead regarding data collection and data submission processes
• Properly process, store, and document samples collected in the field
• Perform data quality control checks
• Maintain field equipment and vehicle
• Pre-season preparation as needed
• Field data collection and QC
• Post-trip data submission and QC
• Post-season finalization
Appendix B. Setting Benchmarks
Setting benchmarks for indicators is a necessary but often challenging step in defining monitoring objectives. This process should be completed by the IDT concurrently with planning an AIM monitoring effort and is included as part of the Monitoring Design Worksheet. Setting benchmarks during the planning phase helps ensure data can be used efficiently for land management decisions. Benchmarks should be based on knowledge of the potential of the resource and the conditions needed to sustain desired ecosystem structure, function, and services. For example, the BLM has set a number of benchmarks for sagebrush cover and other vegetation characteristics to maintain habitat for the Greater sage-grouse as part of the Resource Management Plan amendment process (e.g., Stiver et al. 2015). These benchmarks were based on peer-reviewed research demonstrating the conditions beneficial for sage-grouse. Another common approach is to use the conditions observed at individual or groups of reference sites to set benchmarks. For example, the EPA has partnered with BLM and other agencies to identify a network of "least-disturbed" sites. Benchmarks are then defined in terms of the departure of sampled sites from the range of indicator values across a network of "reference" sites. Networks of reference sites can be used to account for natural variability among sites and through time (reviewed in Hawkins et al. 2010). Benchmarks will often vary across the landscape based on natural environmental gradients; therefore, variability in ecological potential should be considered when setting benchmarks. The goal is to ensure we are comparing assessed sites to those with similar potential. Thus, similar biophysical areas with similar ecological potential should have similar benchmarks. In contrast, areas with large differences in ecological potential may have large differences in benchmarks. There are numerous approaches for accomplishing this, ranging from landscape classification systems to modeling continuous ecological gradients. Ecological site descriptions (Caudle et al. 2013), grouping least-disturbed sites by ecoregion or stream size (Hughes et al. 1986; Hawkins et al. 2000), and grouping sites by Rosgen stream type are examples of landscape classification. Site-specific empirical models (e.g., Hill et al. 2013; Olson and Hawkins 2012) can help avoid the need to categorize landscapes into discrete classes by modeling continuous environmental gradients, with each assessed site capable of having different potential. Benchmarks may also vary based on management objectives. For example, a post-treatment objective for an Emergency Stabilization treatment may differ from an objective for a land health standard that is evaluated on an ecological site within a grazing allotment. Within BLM, there may be specific policy guidance that informs objectives (see discussion below).
An alternative to employing varying benchmarks based on management objectives is to vary the proportion of the landscape required to meet the benchmark. This approach enables land managers to strive for a consistent set of conditions but make management decisions about the percentage of resources that meet those conditions based on their management objectives. For example, a larger proportion of the landscape may be required to meet benchmarks in a Wilderness Study Area compared to a motorized recreation area.
Approaches for Setting Benchmarks
The key to setting benchmarks is to clearly document and justify the approach taken. Below is an overview of common approaches to setting benchmarks (Figure 2). These approaches vary in their potential for bias, our ability to quantify bias, ease of communication, applicability to the management question, and availability in a given geographic region. However, all can be defensible if used appropriately and the reasoning is well documented. Often, a combination of these approaches is required to cover different monitoring indicators or to provide multiple lines of evidence and increase confidence in the benchmark. Best professional judgment, including review by an IDT, should inform any benchmark-setting approach.
Policy
Specific benchmark values are sometimes set in policy and/or decision documents (e.g., Biological Opinions, Resource Management Plan amendments for Greater sage-grouse). Generally, these benchmark values are based on one or more of the other information sources described below. Rather than specific benchmark values, policy documents may instead outline an approach to use to set benchmarks (e.g., reference conditions in the Land Health Manual and Handbook, site stabilization criteria for Emergency Stabilization and Rehabilitation treatments). All policy recommendations should be followed, as these represent legal commitments by the BLM. Other examples include State Air or Water Quality Standards, Resource Management Plan objectives, and Allotment Management Plan objectives. A specific example is the Greater sage-grouse habitat objectives in the Idaho and Southwestern Montana Greater Sage-Grouse Approved Resource Management Plan Amendment (Table 2-2 on page 2-5).
Best practices for the implementation of policy benchmarks:
• Ensure policy is current.
• Ensure policy is applicable to the geographic area of interest.
• Ensure that any new science that has emerged since policy was established is employed to inform or refine benchmarks.
Reference Conditions
Reference conditions are thought to provide important context in land management because they represent a state where ecological processes and functions are maintained (e.g., IIRH Tech Ref, BLM Land Health Handbook). Thus, reference conditions can be used to characterize expected natural conditions for assessed sites, from which we can set benchmarks for land management (reviewed in Stoddard et al. 2006). The "reference" condition can be defined in a variety of ways, from historic conditions (e.g., pre-European settlement in North America) to least-disturbed conditions representing the best available conditions found in the present-day landscape under natural disturbance regimes. Recognizing the difficulty of characterizing historic conditions, a practical approach to determining reference conditions is to identify least-disturbed conditions (i.e., minimal human impacts).
Such conditions can be identified by screening landscapes for areas where 1) ecological processes are functioning (as inferred from structural/functional indicators; Pellant et al. 2020; BLM Land Health Handbook) and/or 2) surface disturbances are below thresholds thought to impact ecosystem structure and function (e.g., < 1 km/km2 road density, < 3% agricultural land use, or certain distances from water sources where livestock grazing pressure is light to moderate; e.g., Landsberg et al. 2003; Miller et al. 2016; Ode et al. 2016). The characterization of least-disturbed conditions can vary through space and time because human impacts are distributed unevenly, change through time, and have differing effects under different physiographic conditions. Similarly, the criteria used to identify least-disturbed conditions can vary among indicators. Below we highlight several different ways benchmarks can be developed from a group of reference sites. We largely focus on the use of multiple reference sites to characterize the range of likely conditions. This "natural range of variability" of reference conditions acknowledges the dynamic nature of ecosystems resulting from natural disturbance events such as drought, floods, disease, fire, mass wasting events, and grazing by native ungulates.
Predicted natural conditions (available for several lotic indicators; applies to terrestrial also but often not available): Field data from a network of reference sites can be combined with geospatial data to model reference conditions across the landscape. These models can then be used to predict reference conditions for sampled sites. In this approach, benchmarks are set based on the site-specific predictions and the associated error of the model. Models can be advantageous because they account for gradients in resource potential, make site-specific predictions, and have known levels of error in their predictions. Models have been developed that predict reference conditions for lotic macroinvertebrates, nutrients, stream temperature, and some instream habitat variables for selected geographic regions (e.g., Hill et al. 2013; Olson and Hawkins 2012); similar models for terrestrial ecosystems are in development.
Best practices for the implementation of benchmarks based on predictive models:
• Review a list of models available from the National AIM Team and consult with other specialists in the state to determine if other applicable models exist. Then, schedule a meeting with the National AIM Team to discuss the pros and cons of each available model.
• When choosing models, consider the following:
o Ensure that the field methods used to sample reference site networks are compatible with local AIM monitoring data.
o Understand how reference conditions were defined and used to develop a given model.
o Ensure that reference data used to create a specific model are applicable to the geographic area of interest.
o Consider the quality of the model benchmarks: how well can the model predict reference conditions, and how large is the model error used to set the benchmarks?
• When reviewing resulting benchmarks from a model:
o Consider whether the model was applicable to a given site (the National Aquatic Monitoring Center (NAMC) provides output to assist with this for lotic models).
o Think critically about the degree of departure from reference condition that is allowable while still maintaining ecosystem structure and function.
o Consider the potential for the model to under- or over-predict in certain conditions. Model accuracy and precision, which can vary across the landscape, may influence how conservative your benchmarks are relative to your management objectives. For example, a large, hot, dry, low-elevation river site may not be adequately represented by the reference sites included in the models and may also be likely to have fewer macroinvertebrate taxa. Therefore, the model may overpredict the number of macroinvertebrate taxa for the site and lead to poor scores even though the site may actually be in good condition. Whether or not you use such sites in a specific analysis will depend on your management question and how confident you need to be in your results.
Percentiles/range of variability among reference site networks (broadly available for lotic ecosystems; applies to terrestrial also but often not available): Data collected at networks of reference sites can be used to develop frequency distributions of reference site indicator values. The distributions of indicator values characterize the natural range of variability expected to occur in a region. The percentiles of the resulting distributions can be used to set benchmarks, against which monitoring data can be compared and deviations from reference conditions identified. The main difference between this approach and a modeled approach is that rather than modeling reference conditions continuously across the landscape, reference site networks are typically grouped by categorical variables such as physiographic boundaries (e.g., level III ecoregions, Rosgen stream types, ecological sites) to account for differences in reference site potential and the resulting frequency distributions driven by factors such as climate and topography. For example, the 90th and 70th percentiles of reference site fine sediment values for streams in the Colorado Plateau ecoregion can be used as benchmarks to classify the condition of a monitoring site as having "major", "moderate", or "minimal" departure from reference conditions. In other words, a site would be categorized as having major departure if its fine sediment value exceeds the 90th percentile of the reference distribution, moderate departure if its value falls between the 70th and 90th percentiles, and minimal departure if its value is below the 70th percentile. This approach does not have known levels of accuracy and precision, which, compared to a model approach, limits our understanding of whether we may be over- or under-protecting a resource.
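To illustrate the percentile approach just described, the minimal sketch below (Python; the reference values and the monitored site's value are made up for illustration) classifies a site's fine sediment value against the 70th and 90th percentiles of a reference distribution, following the Colorado Plateau example above. Whether the site "achieves" a benchmark then depends on which departure categories the monitoring objective treats as acceptable.

```python
import numpy as np

# Hypothetical fine sediment values (%) from a network of reference sites
reference_fines = np.array([4, 6, 7, 9, 10, 11, 12, 14, 15, 17,
                            18, 20, 21, 23, 25, 27, 30, 33, 38, 45])

# Fine sediment increases with degradation, so higher percentiles of the
# reference distribution mark greater departure from reference conditions
p70, p90 = np.percentile(reference_fines, [70, 90])

def departure_class(observed_fines):
    """Assign a departure-from-reference category to a monitored site."""
    if observed_fines > p90:
        return "major"
    elif observed_fines > p70:
        return "moderate"
    return "minimal"

# A hypothetical monitored site with 36% fines falls above the 90th percentile
print(p70, p90, departure_class(36))
```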
Best practices for the implementation of benchmarks based on percentiles/range of variability among reference site networks:
• Ensure that the field methods used to sample reference site networks are compatible with previous monitoring data.
• Understand how reference conditions were defined and used to develop indicator distributions.
• Consider sample sizes greater than 30, which are optimal for developing representative distributions.
• Separate reference sites into ecologically similar groups to help account for natural variability, but balance this against meeting minimum sample sizes.
• Ensure that reference data used to build distributions are applicable to the geographic area of interest.
• Consider indicator distributions: highly skewed or narrow reference distributions (e.g., a very small interquartile range, or difference in indicator values between the 25th and 75th percentiles), or distributions with upper or lower limits, may need to be handled differently.
• Think critically about the degree of departure from reference that is allowable while still maintaining ecosystem structure and function.
Ecological Site Descriptions (ESDs) or other land potential-based conceptual models (e.g., habitat types): Ecological Site Descriptions (ESDs) provide information about different types of land, including their potential or reference condition, that can be used to set benchmarks. The interagency manual defines an ecological site as "a conceptual division of the landscape that is defined as a distinctive kind of land based on recurring soil, landform, geological, and climate characteristics that differs from other kinds of land in its ability to produce distinctive kinds and amounts of vegetation and in its ability to respond similarly to management actions and natural disturbances" (Caudle et al. 2013). An underpinning assumption is that soils, climate, geomorphology, and plant species can be grouped with sufficient precision to inform reference conditions and associated changes. ESDs are conceptually similar to the previous approaches but differ in that the development process relies more on professional judgment. They are developed by the USDA-NRCS and other partners using a variety of information sources, including professional judgment, peer-reviewed studies, and field data.
Best practices for the implementation of benchmarks based on Ecological Site Descriptions:
• Given high variation in ESD quality, be sure to consider the ESD itself as well as the information it is based on.
• Based on the ESD's state-and-transition model of ecosystem dynamics, the reference state (or the appropriate community within it given recent disturbance) is frequently used to set benchmarks.
• When available, reference sheets from Interpreting Indicators of Rangeland Health (Pellant et al. 2020) are ideal sources of benchmarks.
• Ensure compatibility between the field methods used in ESD and reference sheet documentation and AIM data, and be ready to adjust benchmarks accordingly to address any incompatibility. Contact the National AIM Team for past research on how different methodologies compare.
• More information about the conceptual underpinnings of ESDs and their treatment of reference conditions is available from Caudle et al. 2013 and Pellant et al. 2020.
Current conditions from existing AIM and other data: Existing monitoring data, whether from reference sites or not, can provide an additional line of evidence for setting benchmarks.
This type of information is especially useful when other benchmark information is lacking. While the previously described approaches largely represent off-the-shelf products developed by others, this approach is guided by the end user and requires considerable discretion. There are two broad steps to this approach. First, select a set of sites to use as a "reference set." This will include screening sites by specific attributes (e.g., burned vs. unburned; percent disturbance in the watershed) to identify best available/least-disturbed conditions and to ensure there is sound reasoning to expect that they are in good condition and represent a management target. When screening potential sites, it is advisable to use a different, or at least a much broader, set of sites than the ones in the area of interest for which an assessment of condition and trend is sought (see best practices). Second, decide what fraction of sites is likely in "reference" or desired condition, considering the monitoring design used to select the sites and any site screening. The benchmark will correspond to the indicator values observed at those sites. A visualization of the data using histograms or box plots (e.g., Fig. 2) will be essential. For example, the EPA recommends using the 5th or 25th percentile of regional nutrient concentrations in streams as a benchmark to differentiate acceptable vs. unacceptable nutrient values when working with a network of non-reference sites (US EPA 2000). Keep in mind that which quantile is used will depend on whether the indicator increases or decreases with degradation (e.g., in sagebrush steppe, degradation is associated with decreases in perennial grasses and increases in bare ground). Some indicators can be both too high and too low (e.g., litter, pH). This approach can be very informative, especially in combination with other sources of information. See the example from northeastern California (Figure 2; more details in Johnson et al. 2017); a minimal code sketch of this percentile calculation appears just before the "Reporting Monitoring Results with Benchmarks" discussion below.
Figure 2. Example histograms of bare ground for all unburned terrestrial AIM plots in an ecoregion, split by type of land. This information can be helpful for setting benchmarks. For example, if it were decided that the lower 25th percentile of bare ground values represents desired conditions, the benchmark for clayey areas would be about 5% bare ground and the benchmark for sandy areas would be about 15% bare ground. Other information sources, such as professional judgment or peer-reviewed articles, should be used to validate and/or justify adjustments to these benchmarks. More details on this example can be found in Johnson et al. 2017.
Best practices for the implementation of benchmarks based on AIM data or data collected by other entities:
• Consider whether to err on the side of "over-protecting" resources (e.g., employing benchmarks that result in more conservative management) vs. "under-protecting" them (i.e., employing less conservative benchmarks).
• Ensure that reference data used to build distributions are applicable to the geographic area of interest.
• Start with a set of sites that is different from, or at least covers a much broader area than, the area where an assessment of condition and trend is sought, to avoid circular reasoning. If the same sites are used to establish a benchmark at the 25th percentile, then 25% of those sites will fail to meet the benchmark and 75% of them will meet it, which is an arbitrary finding.
• Carefully choose the screening criteria used to identify best available or least-disturbed conditions.
If screening results in the inclusion of degraded sites, the resulting benchmarks will under-protect the resource. Choose a percentile that is informed by the chosen screening approach.
• In areas where site types such as ESDs aren't available, other potential-based resource classifications can be used to group monitoring sites, including classifications based on AIM site characterization data.
• Sample sizes greater than 30 are ideal for developing representative distributions.
• Consider indicator distributions: highly skewed or narrow reference distributions (e.g., a very small interquartile range, or difference in indicator values between the 25th and 75th percentiles), or distributions with upper or lower limits, may need to be handled differently.
• Think critically about the degree of departure from reference that is allowable while still maintaining ecosystem structure and function.
For more information on setting benchmarks using AIM data, see Appendix 2 of BLM Technical Note 453.
Peer Reviewed Articles
Scientific research that addresses how ecosystem structure, function, and services (including habitat) relate to indicator values can be very useful for setting benchmarks. Examples include the seasonal habitat indicator values in the Greater Sage-Grouse Habitat Assessment Framework (e.g., Table 16 on p. 41) and the Greater Sage-Grouse Resource Management Plan Amendments (e.g., Table 2-2 on page 2-5 of the Idaho and Southwestern Montana Greater Sage-Grouse Approved Resource Management Plan Amendment). Habitat conditions for other species detailed in Biological Opinions provide further examples. Good sources for peer-reviewed studies include Google Scholar, Journal Map, and the BLM Library.
Best practices for the implementation of benchmarks based on peer-reviewed articles:
• Ensure compatibility among field methods and that results are applicable to the geographic area of interest.
• Realize that the quality of all journals and published papers is not equal.
• Ensure literature is current and from a reputable peer-reviewed journal.
• Look for replication or corroboration of findings among multiple studies.
• Cite the studies used and provide a rationale for why other pertinent studies were not included.
Best Professional Judgment
Best professional judgment should be used to validate the results of any benchmark-setting approach. In addition, it should always be used as one of several lines of evidence. Natural resource managers' knowledge based on their experience is one of the most widely available types of information for setting benchmarks. This information is very valuable, especially when it comes from multiple land managers with many years of experience with a variety of situations across the landscape (Knapp et al. 2011).
Best practices for the implementation of benchmarks based on best professional judgment:
• Work in interdisciplinary (ID) teams and be prepared to provide resumes in the event that the approach is challenged in court.
• Be aware of individual or group bias.
• When possible, use best professional judgment along with other information types to set benchmarks.
• Document the process used.
For more information, see BLM Rangeland Health Handbook 4180-1; Interpreting Indicators of Rangeland Health; Chapter 4: Management Objectives in Measuring and Monitoring Plant Populations (Elzinga et al. 1998); Stoddard et al. 2006; Hawkins et al. 2010; and Karl and Herrick 2010.
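To make the bare-ground example from the "Current conditions from existing AIM and other data" approach (Figure 2) concrete, the minimal sketch below (Python; the column names and plot values are hypothetical and chosen only to roughly mirror the Figure 2 benchmarks) groups screened, unburned plot data by site type and uses the 25th percentile of bare ground as a candidate benchmark. As noted above, candidate values derived this way should be validated with other lines of evidence such as ESDs, literature, or best professional judgment.

```python
import pandas as pd

# Hypothetical screened plot data: unburned AIM plots attributed with a site type
plots = pd.DataFrame({
    "site_type":   ["clayey", "clayey", "clayey", "sandy", "sandy", "sandy"],
    "bare_ground": [4.0, 6.0, 12.0, 13.0, 18.0, 30.0],   # percent bare ground
})

# Bare ground increases with degradation, so the lower tail of the distribution
# represents the best available (candidate "desired") conditions
candidate_benchmarks = (
    plots.groupby("site_type")["bare_ground"]
         .quantile(0.25)
         .rename("bare_ground_benchmark_pct")
)
print(candidate_benchmarks)   # roughly 5% for clayey and 15.5% for sandy here
```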
Reporting Monitoring Results with Benchmarks
Using benchmarks to interpret monitoring information is not a new concept for land managers. However, applying benchmarks to estimate the proportion of a landscape that achieves benchmark conditions is new. Monitoring locations are assigned a condition class (e.g., meeting vs. not meeting the benchmark) based on the departure of observed indicator value(s) from the benchmark(s). Importantly, benchmarks can vary for different monitoring locations according to biophysical characteristics and ecological potential. To conduct weighted analyses and assess the condition of a population of stream reaches, completed benchmark tools should be sent back to the NOC at the end of the study. The NOC will then derive statistical estimates of the percent of acres/kilometers in a given condition (proportional estimates) for the specified indicators and produce additional figures to aid in interpreting these results.
Example Results
Terrestrial Reporting Example with Benchmarks
• Figure 4. Proportion of early brood-rearing sage-grouse habitat that is meeting the benchmark of 15-25% sagebrush cover. The objective was for sagebrush cover to meet this benchmark across 80% of the habitat area, but it was not achieved.
For nesting and early brood-rearing sage grouse habitats, one objective is for sagebrush cover to be greater than 15% and less than 25% across 80% of the habitat area. The benchmark in this case is greater than 15% and less than 25% sagebrush cover, and it is set in policy based on sage grouse research (e.g., Stiver et al. 2015). The proportion of the habitat area required to meet the benchmark is 80%. We can estimate the proportion of habitat area meeting the sagebrush cover benchmark based on the number of monitoring sites achieving the benchmark. Sagebrush cover values at 19 monitoring sites in early brood-rearing habitat were compared against this benchmark (Table 1). Overall, 33% of early brood-rearing habitat met this benchmark (Figure 4). Given that the objective for sagebrush cover was to meet the benchmark across 80% of the habitat area, the objective was not achieved.
• Table 1. Example sagebrush cover data from monitoring sites and how they relate to the benchmark in the objective. Site 1 achieves the benchmark of 15-25% sagebrush cover set forth in the objective, whereas Sites 2 and 3 do not.
• Sagebrush cover is only one indicator used to assess sage grouse habitat. To complete site-scale habitat suitability ratings for a HAF assessment (Stiver et al. 2015), the IDT would take into account multiple indicators and the proportion of the assessment area meeting benchmarks.
Lotic Reporting Example with Benchmarks
• Figure 5. Distribution of bank stability values observed among a network of 30 least-disturbed reference sites. Bank stability decreases in response to stress; thus, the lower 25th and 5th quantiles of the reference distribution were used to define benchmarks to differentiate 'Minimal', 'Moderate', or 'Significant' departure from reference condition. These quantiles correspond to bank stability values of 87% and 75%, respectively. The objective is to maintain "Minimal" or "Moderate" departure from reference conditions, or bank stability greater than 75%.
To assess whether stream channels are maintaining proper form and function, bank stability is a common indicator.
For example, managers might seek to maintain bank stability greater than 75%, depending on stream type, for 90% of stream kilometers with 90% confidence over 10 years. The benchmark in this objective is for bank stability to be greater than 75%, depending on stream type. The degree of allowable departure from this benchmark is 20% with 90% confidence. This benchmark was derived based on the natural range of variability for bank stability across a network of least-disturbed sites (Figure 5). In contrast, the benchmark in the terrestrial example was set in policy based on research. To evaluate whether the bank stability objective was achieved, we first look at the distribution of bank stability at monitoring sites in relation to least-disturbed reference sites. Using the condition classes above (Figure 5), we can classify each of the 20 monitoring sites as having 'Minimal', 'Moderate', or 'Significant' departure from reference depending on its departure from the range of reference conditions (Table 2; Figure 6).
Table 2. Example bank stability data from monitoring sites, along with condition classes based on least-disturbed reference sites. Sites 2 and 3 achieve the benchmark of 75% bank stability set forth in the objective.
• Figure 6. Bank stability compared between 20 monitoring sites and 30 least-disturbed 'reference' sites. Monitoring sites with bank stability values falling below the 25th quantile of reference (87% bank stability) are considered to have 'Moderate' departure from reference condition. Monitoring sites with values below the 5th quantile (75% bank stability) are considered to have 'Significant' departure.
Finally, we can estimate the proportion of stream kilometers in each of the condition ratings based on the number of sites achieving benchmarks. The objective was for 90% of stream kilometers to have greater than 75% bank stability (i.e., scoring 'Minimal' or 'Moderate'). A total of 91.7% of stream kilometers met this benchmark, with 83.4% of stream kilometers achieving minimal departure from reference conditions and 8.3% achieving moderate departure from reference conditions (Figure 7). Therefore, the objective was achieved.
• Figure 7. Proportion of stream kilometers having minimal, moderate, or significant departure from least-disturbed reference conditions. The objective was for 90% of stream kilometers to have greater than 75% bank stability (i.e., scoring 'Minimal' or 'Moderate'; green and yellow bars), and this objective was achieved.
Appendix C: Sample Sufficiency Tables
AIM Sample Sufficiency Tables
Introduction: AIM practitioners can use the margin of error (MOE) tables below to better understand the total sample sizes needed to obtain acceptable confidence intervals. These tables can only be used if there is enough available AIM data to estimate the proportions of the resource that are and are not meeting monitoring objectives. The statistical foundations on which these tables are based can only be applied to proportional estimates of resource condition, where point weights are used in the calculations. Therefore, these tables should not be applied to a plot-counting approach. The MOE is half the width of a confidence interval. Complementary landscape percentages (e.g., 5/95, 10/90, 20/80) have the same MOE, so if the proportions of the resource that are meeting and are not meeting the objective are complementary, simply multiply the MOE by two to derive the width of the entire confidence interval. In some scenarios, the proportions of the resource that are and are not meeting the objective may not be complementary (e.g., 70% meeting, 20% not meeting, 10% unknown), and thus the MOE will differ for the different proportional categories. Confidence intervals are bounded by 0 and 100%.
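The tabulated values below appear consistent with the standard normal-approximation margin of error for a proportion, computed with an n - 1 denominator; this is an inference from the tables themselves, not an official derivation, so the sketch below (Python) should be treated as illustrative only. It reproduces, for instance, the 17.1% and 7.3% values used in the worked example that follows.

```python
from scipy.stats import norm

def margin_of_error(p_meeting, n, confidence=0.80):
    """Approximate MOE (as a percentage) for a proportional estimate.

    Assumes the published tables follow the normal approximation with an
    (n - 1) denominator; verify against the tables before relying on it.
    """
    z = norm.ppf(1 - (1 - confidence) / 2)          # two-sided critical value
    return 100 * z * (p_meeting * (1 - p_meeting) / (n - 1)) ** 0.5

# Worked example from the text: 80% of the resource meeting, 20% not meeting
print(round(margin_of_error(0.80, 10), 1))   # about 17.1 with n = 10
print(round(margin_of_error(0.80, 50), 1))   # about 7.3 with n = 50
```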
Table Use Instructions
To use the tables, practitioners first need to know the proportions of the landscape that are estimated to be meeting and not meeting the benchmark for a given indicator, and the number of sampled points (i.e., the current sample size) that were used to calculate those estimates. Second, practitioners need to know the level of confidence that they would like to have surrounding their final proportional estimates. To obtain an MOE, select the table that corresponds to the desired confidence level. Next, find the sample size that corresponds to the number of points that were used to derive proportional estimates of resource condition. Follow this row to the right until you reach the column that most closely matches the percentages of the resource that were observed to be meeting and not meeting the specified benchmark. The value displayed in this cell is the estimated margin of error (MOE) for the proportion based on the current sample size.
Example
An IDT wants to understand what sample size is needed to estimate, with 80% confidence and a reasonably narrow confidence interval (CI), whether one of their objectives is being met. The current sample size (N) for the reporting unit is 10 points. Data collected at these 10 points were used to estimate that 80% of the resource is meeting their objective and 20% is not. Using Table 1, the IDT determined that the MOE for their data is 17.1% for both proportions of the landscape, meaning that the width of their CIs is 34.2%. Since this value is quite large, the IDT decided to try to reduce the width of the CIs to 15% or less. They then used the table to determine that they should attempt to sample 40 additional points to achieve a total sample size of 50, which should reduce the MOEs to 7.3% and the width of the CIs to 14.6%.
Table 1. Margin of error (MOE) estimates for an 80% confidence level. The highlighted cell in this table corresponds to the example above. Columns give the percentage of the resource either meeting/not meeting, or not meeting/meeting, a benchmark.
Sample size (N)   5/95   10/90  15/85  20/80  25/75  30/70  35/65  40/60  45/55  50/50
5                 14.0   19.2   22.9   25.6   27.8   29.4   30.6   31.4   31.9   32.0
10                 9.3   12.8   15.2   17.1   18.5   19.6   20.4   20.9   21.2   21.4
15                 7.5   10.3   12.2   13.7   14.8   15.7   16.3   16.8   17.0   17.1
20                 6.4    8.8   10.5   11.8   12.7   13.5   14.0   14.4   14.6   14.7
25                 5.7    7.8    9.3   10.5   11.3   12.0   12.5   12.8   13.0   13.1
30                 5.2    7.1    8.5    9.5   10.3   10.9   11.4   11.7   11.8   11.9
35                 4.8    6.6    7.8    8.8    9.5   10.1   10.5   10.8   10.9   11.0
40                 4.5    6.2    7.3    8.2    8.9    9.4    9.8   10.0   10.2   10.3
45                 4.2    5.8    6.9    7.7    8.4    8.8    9.2    9.5    9.6    9.7
50                 4.0    5.5    6.5    7.3    7.9    8.4    8.7    9.0    9.1    9.2
55                 3.8    5.2    6.2    7.0    7.6    8.0    8.3    8.5    8.7    8.7
60                 3.6    5.0    6.0    6.7    7.2    7.6    8.0    8.2    8.3    8.3
65                 3.5    4.8    5.7    6.4    6.9    7.3    7.6    7.8    8.0    8.0
70                 3.4    4.6    5.5    6.2    6.7    7.1    7.4    7.6    7.7    7.7
75                 3.2    4.5    5.3    6.0    6.4    6.8    7.1    7.3    7.4    7.4
80                 3.1    4.3    5.2    5.8    6.2    6.6    6.9    7.1    7.2    7.2
Table 2. Margin of error (MOE) of percentage estimates for an 85% confidence level. Columns give the percentage of the resource either meeting/not meeting, or not meeting/meeting, a benchmark.
Sample size (N)   5/95   10/90  15/85  20/80  25/75  30/70  35/65  40/60  45/55  50/50
5                 15.7   21.6   25.7   28.8   31.2   33.0   34.3   35.3   35.8   36.0
10                10.5   14.4   17.1   19.2   20.8   22.0   22.9   23.5   23.9   24.0
15                 8.4   11.5   13.7   15.4   16.7   17.6   18.4   18.8   19.1   19.2
20                 7.2    9.9   11.8   13.2   14.3   15.1   15.8   16.2   16.4   16.5
25                 6.4    8.8   10.5   11.7   12.7   13.5   14.0   14.4   14.6   14.7
30                 5.8    8.0    9.6   10.7   11.6   12.2   12.7   13.1   13.3   13.4
35                 5.4    7.4    8.8    9.9   10.7   11.3   11.8   12.1   12.3   12.3
40                 5.0    6.9    8.2    9.2   10.0   10.6   11.0   11.3   11.5   11.5
45                 4.7    6.5    7.8    8.7    9.4    9.9   10.3   10.6   10.8   10.8
50                 4.5    6.2    7.3    8.2    8.9    9.4    9.8   10.1   10.2   10.3
55                 4.3    5.9    7.0    7.8    8.5    9.0    9.3    9.6    9.8    9.8
60                 4.1    5.6    6.7    7.5    8.1    8.6    8.9    9.2    9.3    9.4
65                 3.9    5.4    6.4    7.2    7.8    8.2    8.6    8.8    8.9    9.0
70                 3.8    5.2    6.2    6.9    7.5    7.9    8.3    8.5    8.6    8.7
75                 3.6    5.0    6.0    6.7    7.2    7.7    8.0    8.2    8.3    8.4
80                 3.5    4.9    5.8    6.5    7.0    7.4    7.7    7.9    8.1    8.1
Table 3. Margin of error (MOE) of percentage estimates for a 90% confidence level. Columns give the percentage of the resource either meeting/not meeting, or not meeting/meeting, a benchmark.
Sample size (N)   5/95   10/90  15/85  20/80  25/75  30/70  35/65  40/60  45/55  50/50
5                 17.9   24.7   29.4   32.9   35.6   37.7   39.2   40.3   40.9   41.1
10                12.0   16.5   19.6   21.9   23.7   25.1   26.2   26.9   27.3   27.4
15                 9.6   13.2   15.7   17.6   19.0   20.2   21.0   21.5   21.9   22.0
20                 8.2   11.3   13.5   15.1   16.3   17.3   18.0   18.5   18.8   18.9
25                 7.3   10.1   12.0   13.4   14.5   15.4   16.0   16.4   16.7   16.8
30                 6.7    9.2   10.9   12.2   13.2   14.0   14.6   15.0   15.2   15.3
35                 6.2    8.5   10.1   11.3   12.2   12.9   13.4   13.8   14.0   14.1
40                 5.7    7.9    9.4   10.5   11.4   12.1   12.6   12.9   13.1   13.2
45                 5.4    7.4    8.8    9.9   10.7   11.4   11.8   12.2   12.3   12.4
50                 5.1    7.1    8.4    9.4   10.2   10.8   11.2   11.5   11.7   11.8
55                 4.9    6.7    8.0    9.0    9.7   10.3   10.7   11.0   11.1   11.2
60                 4.7    6.4    7.6    8.6    9.3    9.8   10.2   10.5   10.6   10.7
65                 4.5    6.2    7.3    8.2    8.9    9.4    9.8   10.1   10.2   10.3
70                 4.3    5.9    7.1    7.9    8.6    9.1    9.4    9.7    9.8    9.9
75                 4.2    5.7    6.8    7.6    8.3    8.8    9.1    9.4    9.5    9.6
80                 4.0    5.6    6.6    7.4    8.0    8.5    8.8    9.1    9.2    9.2
Table 4. Margin of error (MOE) of percentage estimates for a 95% confidence level. Columns give the percentage of the resource either meeting/not meeting, or not meeting/meeting, a benchmark.
Sample size (N)   5/95   10/90  15/85  20/80  25/75  30/70  35/65  40/60  45/55  50/50
5                 21.4   29.4   35.0   39.2   42.4   44.9   46.7   48.0   48.8   49.0
10                14.2   19.6   23.3   26.1   28.3   29.9   31.2   32.0   32.5   32.7
15                11.4   15.7   18.7   20.9   22.7   24.0   25.0   25.7   26.1   26.2
20                 9.8   13.5   16.1   18.0   19.5   20.6   21.4   22.0   22.4   22.5
25                 8.7   12.0   14.3   16.0   17.3   18.3   19.1   19.6   19.9   20.0
30                 7.9   10.9   13.0   14.6   15.8   16.7   17.4   17.8   18.1   18.2
35                 7.3   10.1   12.0   13.4   14.6   15.4   16.0   16.5   16.7   16.8
40                 6.8    9.4   11.2   12.6   13.6   14.4   15.0   15.4   15.6   15.7
45                 6.4    8.9   10.6   11.8   12.8   13.5   14.1   14.5   14.7   14.8
50                 6.1    8.4   10.0   11.2   12.1   12.8   13.4   13.7   13.9   14.0
55                 5.8    8.0    9.5   10.7   11.6   12.2   12.7   13.1   13.3   13.3
60                 5.6    7.6    9.1   10.2   11.1   11.7   12.2   12.5   12.7   12.8
65                 5.3    7.4    8.8    9.8   10.6   11.2   11.7   12.0   12.2   12.2
70                 5.1    7.1    8.4    9.4   10.2   10.8   11.2   11.6   11.7   11.8
75                 5.0    6.8    8.1    9.1    9.9   10.4   10.9   11.2   11.3   11.4
80                 4.8    6.6    7.9    8.8    9.6   10.1   10.5   10.8   11.0   11.0
Appendix D: Understanding the Master Sample
The use of statistically valid sample designs for selecting monitoring locations enables reporting on the condition and trend of all monitored renewable resources within an area of interest with known levels of precision and accuracy. Additionally, when similar field methods are used, data can be combined across monitoring efforts and used to inform land management decisions at multiple spatial scales and across data needs.
However, to realize these benefits, sample designs must be developed in a consistent, compatible manner, including the geospatial layers used to define the study area.
Figure 1. Two million terrestrial (left) and 67,000 lotic (right) master sample points available for potential sampling.
Prior to 2016, individual sample designs were developed for each unique AIM project, requiring the compilation of geospatial data layers, statistical expertise, and specialized software packages. Consequently, the development of individual designs was both time and resource intensive. Furthermore, merging data from individual projects to produce estimates at larger scales, or to improve estimates at smaller scales, was complicated. With the increased application of AIM to meet BLM monitoring and assessment needs, the AIM team sought to streamline and standardize the sample design process to increase consistency, reduce the required time and expertise, and assist field offices more efficiently. The result is a 'Master Sample' for the sampling of terrestrial vegetative (i.e., upland) and lotic aquatic (streams and rivers) resources on BLM lands within the contiguous U.S. A master sample consists of a very large number of potential sample locations (see details below) from which project-level sample designs can be selected for specific monitoring needs. Potential sampling locations in the master sample are attributed with many different geospatial layers (e.g., BLM administrative boundaries, watersheds, topography, soils) to facilitate the selection and stratification of monitoring locations for specific projects. Because the geospatial data layers used in the master sample and the process for selecting sample points are standardized, the resulting monitoring sample designs and subsequent data can be more easily shared, integrated, and used for other applications.
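As an illustration of how a project-level design can be drawn from the attributed master sample, the sketch below (Python; all column names and values are hypothetical, and the actual workflow is handled by the National AIM Team) filters master sample points to a study area and takes points in their design order within each stratum. A useful property of GRTS samples is that ordered subsets remain spatially balanced, which is what makes this kind of selection reasonable.

```python
import pandas as pd

# Hypothetical extract of the attributed master sample (one row per point)
master = pd.DataFrame({
    "point_id":     ["T-0001", "T-0002", "T-0003", "T-0004", "T-0005", "T-0006"],
    "field_office": ["Example FO", "Example FO", "Other FO",
                     "Example FO", "Example FO", "Other FO"],
    "stratum":      ["Sagebrush", "Grassland", "Sagebrush",
                     "Sagebrush", "Grassland", "Grassland"],
    "grts_order":   [3, 1, 2, 6, 4, 5],   # hypothetical GRTS ordering attribute
})

# Filter to the study area, then take the first points in GRTS order per stratum;
# ordered subsets of a GRTS sample remain spatially balanced
design = (
    master[master["field_office"] == "Example FO"]
          .sort_values("grts_order")
          .groupby("stratum", group_keys=False)
          .head(2)                         # hypothetical target of 2 points per stratum
)
print(design)
```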
A list of all standardized geospatial data layers is found below:
Attribute | Location | Download date
Land Ownership | http://www.geocommunicator.gov/GeoComm/services.htm#Download | 9/1/2015
BLM district and field offices | http://www.geocommunicator.gov/GeoComm/services.htm#Download | 9/1/2015
BLM Grazing Allotments | http://www.geocommunicator.gov/GeoComm/services.htm#Download | 9/1/2015
BLM Herd Management | http://www.geocommunicator.gov/GeoComm/services.htm#Download | 9/1/2015
Sage Grouse Focal Areas | BLM Internal | 9/1/2015
Sage Grouse Priority Habitat | BLM Internal | 9/1/2015
Sage Grouse General Habitat | BLM Internal | 9/1/2015
BLM EIS Boundaries for Use in Analysis | BLM Internal | 9/1/2015
State and County Boundaries | http://www.census.gov/geo/maps-data/data/tiger-cart-boundary.html | 9/1/2015
BLM existing Land Use Planning Areas | BLM Internal | 8/28/2015
BLM In Progress Land Use Planning Areas | BLM Internal | 8/28/2015
BLM Historic Land Use Planning Area | BLM Internal | 8/28/2015
BLM Solar Energy Zones | http://blmsolar.anl.gov/maps/shapefiles/ | 9/1/2015
BLM Wilderness Areas | BLM Internal | 9/1/2015
BLM Wilderness Study Areas | BLM Internal | 9/1/2015
National Monument, National Conservation Area Boundaries | BLM Internal | 9/1/2015
BLM Wild and Scenic Rivers | BLM Internal | 9/1/2015
SSURGO Map unit | http://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/geo/?cid=nrcs142p2_053627 | 9/1/2015
Elevation | | 9/1/2015
Omernik and EPA Ecoregions (Levels 1, 2, 3, 4) | http://www.epa.gov/wed/pages/ecoregions/na_eco.htm | 9/1/2015
Landfire Biophysical Settings | http://www.landfire.gov/datatool.php | 9/1/2015
Strahler stream order categories | http://www.horizon-systems.com/NHDPlus/NHDPlusV2_data.php | 4/14/2014
Watershed Boundaries – HUC 6, 8, 10 and 12 digits | http://nhd.usgs.gov/data.html | 9/1/2015
Example applications of the AIM Aquatic and Terrestrial Master Samples
• Project effectiveness monitoring (> ~250 acres or ~5 stream kilometers)
• Grazing permit renewals (> ~250 acres or ~5 stream kilometers)
• Watershed assessments
• Resource management plan effectiveness monitoring
• State-level reporting
• Ecoregional or national-level reporting
Benefits of using a master sample
• Increased efficiency and standardization of sample designs
• More efficient and effective field office assistance with survey designs
• Easier and more defensible applications of resource conditions across spatial scales
• Increased ease and defensibility of analyses that combine data from multiple AIM monitoring efforts
Terrestrial master sample details
• Spatial extent: BLM lands within the 13 contiguous western states
• Base layer used to identify BLM lands: Surface Management Agency (SMA) database published July 2015 by the National Operations Center (NOC)
• Survey design approach: generalized random tessellation stratified sampling (GRTS); unweighted point selection with no a priori stratification
• Point density: 1 point per 35 hectares, for a total of 2 million possible sample locations
• Example strata to be used for survey designs: BLM Field/District Offices, BLM Allotments, Landfire Biophysical Settings, Greater Sage-grouse PHMA/GHMA, EPA Ecoregions, SSURGO Soil Map Units
Lotic master sample details
• Spatial extent: 13 contiguous western states with BLM land
• Base layers used to identify BLM streams and rivers: USGS National Hydrography Dataset (NHD) version 2.0, medium resolution (1:100,000), and the Surface Management Agency (SMA) data layer published July 2015 by the National Operations Center (NOC)
• Survey design approach: generalized random tessellation stratified sampling (GRTS); unweighted point selection with no a priori stratification
• Point density: one point per 0.5 km of perennial stream, for a total of over 67,000 possible sample locations
• Example strata to be used for survey designs: Strahler stream order categories, hydrologic unit codes, BLM field office boundaries, BLM districts, Greater Sage-grouse PHMA/GHMA
Appendix ????
1. Wild Horse and Burro "hit areas"
Wild horse and burro populations are known to have a major impact on vegetation, particularly within riparian areas. However, quantifying the impact of wild horse and burro populations on vegetation through remote sensing remains challenging. Where available, high-quality lidar data can be used to quantify vegetation structure and density. However, repeat high-quality lidar data to assess change in vegetation structure remain rare. In theory, high-resolution imagery such as WorldView and Planet could be used to identify stud piles, but in practice these are hard to distinguish from vegetation. Comparisons between current and historic high-resolution imagery may provide some information on wild horse and burro impacts by identifying increases in "game trails." While such trails may also result from other causes, such as increased recreation, they may also be indicative of increased wild horse and burro use and could be used to highlight areas for increased field monitoring.
Figure: A substantial increase in game trails can be seen along Jerry Creek in the Little Book Cliffs Wild Horse Range near Palisade, CO, from September 30, 1980 to August 30, 2018.
2. Trend analysis
A growing archive of remotely sensed data and a concomitant increase in computing power now allow us to perform more robust trend analysis over broader spatial scales. Algorithms such as Continuous Change Detection and Classification (CCDC; Zhu and Woodcock 2014) and LandTrendr (Kennedy et al. 2010) allow users to test for significant trends in pixel reflectance while accounting for seasonal variation. Within the LandCART platform, terrestrial AIM indicators can be modeled over nearly 40 years, and a Sen's slope test (Sen 1968) can be performed to determine whether there is a significant trend in the indicator. Additionally, most BLM surface land has multiple aerial film projects dating back to the 1960s and 70s, and other agencies have collected aerial imagery as far back as the 1930s. The orthomosaics generated through photogrammetric processing of these stereo images often have sub-meter resolution. Moreover, especially in the 1970s, much of this aerial imagery was captured on color-infrared film, highlighting vegetation extent. While older aerial imagery often has poor radiometric quality, it can be used as an early time point for basic vegetation mapping. For instance, these aerial film datasets have been used to map forest cover change (and its effect on natural springs) as well as juniper encroachment into sagebrush habitat.
Figure: Riparian area and stock pond near Manti, UT in 1975.
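As a minimal illustration of the Sen's slope test mentioned above, the Python sketch below uses made-up values standing in for a modeled indicator time series, estimates the Theil-Sen slope, and uses Kendall's tau to judge whether the trend is significant. This is a generic SciPy-based implementation, not the LandCART workflow itself.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

# Hypothetical annual bare ground estimates (%) for one pixel, 1990-2019
years = np.arange(1990, 2020)
rng = np.random.default_rng(0)
bare_ground = 20 + 0.3 * (years - 1990) + rng.normal(0, 2, years.size)

# Theil-Sen (Sen's) slope is robust to outliers in the time series
slope, intercept, lo, hi = theilslopes(bare_ground, years)

# Mann-Kendall-style significance check via Kendall's tau
tau, p_value = kendalltau(years, bare_ground)

print(f"Sen's slope: {slope:.2f} % per year (95% CI by default: {lo:.2f} to {hi:.2f})")
print(f"Kendall's tau: {tau:.2f}, p = {p_value:.3f}")
```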
"Continuous change detection and classification of land cover using all available Landsat data." Remote Sensing of Environment 144 (2014) 152-171. 3. Remote sensing to evaluate critical concepts/additional line of evidence (riparian & wetlands use) Lotic and lentic systems are highly dynamic throughout the year and given the difficulty in access to conduct field surveys, remote sensing can provide an additional line of evidence for documenting the extent and stability of wetland and riparian features. Planet imagery available at about 4-meter resolution allows the user to browse near daily imagery to document daily and seasonal wetland fluctuation. On a coarser scale, the Landsat archive has been classified into various surface water extent datasets (e.g., through the European Commision’s Joint Research Center), so we can document longer-term trends in surface water extent and stability. For example, we can see decreasing inundation frequency in the Blanca Wetlands in the San Luis Valley (example below). https://developers.google.com/earth-engine/tutorials/tutorial_global_surface_water_03 Topics for Implementation Team review:[KA1] 1) Roles and Responsibilities 2)Appendices and numbering 3)Subheader numbering (including steps and if we want these to show up in TOC) 4) Need to update TOC and heading numbers 5) How to finish addressing some of Nicole's comments 6) Highlights in yellow need to be finished or reviewed 7) Incorporating remote sensing into document 8) Any additional literature cited? 9) Benchmarks? --> Is this section complete? Appendix or not? 10) How to deal with Master Sample in Appendix? And is the table still relevant/useful? 11) TOOLS sections!! 12) Section 10 Figures and 11 Photos need to be completed. 13) Revisit designs --> did this section get created? mailto:lreynolds@blm.gov do a search for "riparian" , then "wetland", and "lentic" to make sure terms are being used correctly. Our resource is always "Riparian and Wetland" not wetland first or any other variation.[RV2][RV2] Heads up that there's a mix of "ID team" and "IDT" used in this document[NS3][NS3] Subheader section 1[NM4] Aleta will tweak a little more. [NM5][NM5] This seems a bit repetitive with the section above -- can they be integrated? Also, the principles were already listed in chapter 1 -- do you need them both places?[KJ6][KJ6] Subheader - section 1[NM7] [KEJ8]Could reference the longer table here: Table 1: AIM-Related Policy Summary – How AIM Supports the BLM Mission (Section 9.0 – Tables).[KEJ8][KEJ8][KEJ8] @Krott, Meghan A [YJ9R8][YJ9R8][YJ9R8] This should be Section 2.0[KA10] Section 2.1[KA11] Section 2.2[NM12] Section 2.2.1[NM13] Section 2.2.2[NM14] Section 2.2.3[NM15] Section 2.2.4[NM16] Looking at the entire document as it is now versus way back when I was introduced to it, I'm thinking the RS narrative that we have written might need to be it's own chapter/section with pointers to sections/steps where RS can be applied. I currently have this summary/overview for sectino 1.3.2 and many individual paragraphs that fit into the workflow in various places, but breakup the flow of the general narrative. I have inserted those paragraphs where they were initially planned to be, but I think we should make one chapter for RS.[SL17][SL17][SL17] Nevermind - I have NOT inserted the rest of the RS narrative. 
Maybe they should be an appendix if not with this summary?[SL18R17][SL18R17][SL18R17] Here's all the RS verbiage: AIM Desk Guide TOC RS draft.docx[SL19R17][SL19R17][SL19R17] @Reynolds, Lindsay V read with an eye towards how to integrate Remote Sensing. It's own section OR integrated into the sections? Currently, it seems clunky to be integrated.[RV20] And BPSS? or some mention of a budget tool?[RV21][RV21] I think this needs some discussion on how to incorporate or at least briefly mention budget here in project planning/initiation[RV22R21][RV22R21] [CN23R21]Yes and I think it fits in Step 1 well. [CN23R21] @Reynolds, Lindsay V @Nafus, Aleta M @Claridge, Bonnie C please review and add content and context. Unclear about this section. Thanks![YJ24R21][YJ24R21] I think budget and funding have been adequately incorporated here in Step 1. Whoever added this content did a great job. I'm glad it discusses budget anf funding but doesn't use terms and concepts like BPSS that can change and vary between years and AIM efforts. Now resolving this comment.[RV25R21] [KEJ26]Here is a budget cheatsheet, which is really aimed more at the state leads, but likely has some nuggets for project leads also: AIM Budget Cheatsheet_220104.docx Reviewers: Would the more detailed timeline/calendar linked below be more useful to users of this document compared to the Implementation Calendar above?[KA27] https://doimspp.sharepoint.com/:x:/r/sites/ext-blm-oc-naim/_layouts/15/Doc.aspx?sourcedoc=%7B5B0AA2E2-5D48-4A10-B8A6-94360BE7435F%7D&file=AIM%20General%20Timelines.xlsx&action=default&mobileredirect=true&isSPOFile=1&clickparams=eyJBcHBOYW1lIjoiVGVhbXMtRGVza3RvcCIsIkFwcFZlcnNpb24iOiIyNy8yMjExMzAwNDEwMCIsIkhhc0ZlZGVyYXRlZFVzZXIiOmZhbHNlfQ%3D%3D&cid=36609a39-84f6-41bc-a6ca-5b54b2e45502 I think it's better to have this more simplified figure here in the guide, but I think we should link to the more detailed timeline calendar here.[RV28R27] [KEJ29]This is Shannon's suggestion. As above, I think we could integrate it with the rest of step 5 and shorten it to hit on the high points. Also, let's be careful that we aren't suggesting that we completely redo the design too often based on new information such as remote sensing.[KEJ29] I'm thinking that for terrestrial, you probably wouldn't do anything to the original LUP MDW but a major disturbance might warrant implementation changes or even a short term intensification design. Rewriting to reflect. @Kachergis, Emily J what do you think?[NM30R29][NM30R29][NM30R29][NM30R29] Review section @Kachergis, Emily J @Claridge, Bonnie C @Reynolds, Lindsay V @Savage, Shannon L [YJ31R29][YJ31R29] I've reviewed and this looks good and makes sense to me. There is some danger of overlap/repetition with 3.3.1.1Using Remote Sensing to Inform Monitoring above, but in reading both I think they are good as-is.[RV32R29][RV32R29] [SMR33]Crew Hiring is in 5.3.1.1 move there?[SMR33] [PLJ34]Thinking through this process I suggest including language about MDWs being resource specific. Do we recommend doing a MDW for each AIM resource or one that incorporates all three? I bring this up because our template and example MDW contains both terrestrial and lotic language but I haven’t seen a MDW in practice that contains both so that might confuse an end user….[PLJ34] [CN35R34]The general issue is that we want people to understand the process/questions are EXACTLY the same for all resources, but the details are different so we need different tables, etc. Warrants discussion on the best way to handle it. 
There have only been failed examples of combined resource MDWs, it gets too complicated and too much info to put in one document. [CN35R34] Should figures be numbered consecutively throughout the document or are they restarting at 1 for each section? Previous section had a "Figure 2"... Consider consistency in labeling figures throughout.[KA36][KA36] [CN37R36]Yes, consecutively, so if there were 2 figures prior this would be fig 3. [CN37R36] @Krott, Meghan A @Stropky, Michelle R @Yokomizo, Erick J For us to resolve as stated.[YJ38R36][YJ38R36] [NAM39]We are inconsistent about whether we are left justifying or centering [NAM39][NAM39] [CN40R39]We should “left” correct? [CN40R39][CN40R39] @Yokomizo, Erick J @Stropky, Michelle R @Krott, Meghan A Lets make sure everything is Left justifying. Consistency. [YJ41R39][YJ41R39] [NAM42]Should we update this figure to say “apply stratification (if required), and select appropriate monitoring locations (random or targeted) to meet monitoring objectives” [CN43R42]For step 6 correct? [CN44R42]Step 4, we now have them select indicators. As we continue to evolve with who we’re working with I feel like we need to expand to all indicators. For example targeted reaches might not collect all the core indicators, step down monitoring might not collect all, etc. [CN45R42]Steps 8-12 are associated with Adaptive management more so than this section about Design. Is that okay that we’ve put it here. [CN46R42]Figure legend might need to be updated [CN47R42]Term “Program” in left column seems odd.. it’s really a monitoring effort or plan… but not sure program is the right term? Find and/or update figure for this section @Yokomizo, Erick J @Stropky, Michelle R @Krott, Meghan A [YJ48R42] I think this figure should be moved up to the end of Section 3 Planning and Initiation, but then can still be referenced in the Overview of Design (4.1). And yes, incorporate all of Aleta and Nicole's suggestions above.[RV49R42] [CN50]Why is the tools section deleted? This is one of the consistent subheadings for all sections Assigned for further discussion with Implementation team ALL[YJ51R50] Just to put it here for when this section gets fleshed out, there's the Balanced Design Tool (BET) at https://landscapetoolbox.org/balanced-design-tool[NS52] [CN53]Do we need any text following this before getting into the steps?[CN53] We need analysts to review all of this 4.3 section thoroughly, there are large amount of comments, thoughts and suggestions that need resolving. Please work through these. @Laurence-Traynor, Alexander C @Alexander, Patrick J @Miller, Janet L Ruth[YJ54R53][YJ54R53] [KEJ55]Is this actually just the monitoring design worksheet text, verbatim? If so -- let's make that clear up front. If not -- why not lean on that existing text which has already dealt with this complex topic? [LTAC56R55]Yes, this text was originally copied from the MDW instructions but has since been heavily updated/modified and so will replace the existing instructions. As such I don’t think that outdated document needs to be mentioned here. Once this desk guide is published well need to remember to remove this from the website: https://www.blm.gov/sites/blm.gov/files/docs/2022-04/2020%20MDW%20Website%20Instruction%20Updates_12.17.2020.pdf [LTAC57]I just realized there is no discussion of benchmark groups here and that term is first introduced in the A and R section. 
I feel like benchmark groups should be covered under monitoring objectives.

[CN58]May need to return after reading through all of this, but it seems like we give some background/high-level info about the step and then we tell them exactly what to do. Consider breaking it down into those two sections and being consistent throughout. Start with the "What is this and why it's important." Then walk through step by step how to do it.

[PLJ59]This is confusing me because above in the first sentence we say that a monitoring effort has management objectives plural, while this sentence seems to say each monitoring objective needs its own sampling effort. Would help to clarify.[PLJ59] [CN60R59]I agree; this section also starts to get at MONITORING objectives, thinking about conditions, and the "design" (targeted, random).

Since this was highlighted, I decided this was the preferred term and changed places that said management objectives to management goals for consistency. Might consider a find/replace.[ML61] Assigned to @Yokomizo, Erick J and @Krott, Meghan A to do find/replace.[KA62R61] Megan and I discussed adding Management Goals and Management Objectives as definitions: the Internet says that a goal is an achievable outcome that is generally broad and longer term, while an objective is shorter term and defines measurable actions to achieve an overall goal. This means that we really mean both. Tech Note 453 uses objectives, so we thought that continuing to use objectives for now would make the most sense.[NM63R61] I think I would argue the monitoring objective defines a measurable action to achieve your management goal. I know in discussions with Nicole prior to the A & R training we decided to use management goal and monitoring objective because using objective for both got confusing. But I don't disagree that your management goal can also be an objective. I also understand if it makes most sense to follow the language of TN 453 rather than change it here. [ML64R61]

[NS65]Noted elsewhere too, but there's a mix of "IDT" and "ID team" used in this document, so that should probably be harmonized.[NS65]

The first sentence plus this is all I think that is needed here. [CN68] I also think it should state "Multiple management objectives should be addressed but must be balanced with adequate resources (e.g., ...)." Suggested deletion of text that was confusing. [NM69R68]

Change from "indicators to monitor" to "methods to collect"? Trying to be more intentional about use of the terms indicators and methods. We collect data using methods. We calculate indicators from those methods.[ML70] [LTAC71R70]I agree we need to stop blurring the lines between indicators and methods. However, I think this section is trying to (poorly) give instructions for additional indicators to calculate from the core methods. I would say this is a very minor and rarely done step for terrestrial at least. Step 4 should focus on additional (supplemental) methods to collect. Right now this section reads as redundant with step 4 to me.
I suggest writing to focus on additional indicators to calculate from core methods or remove and focus on additional methods in step 4.[LTAC71R70] [NS72]I've seen "Core Methods", "Core methods", and "core methods". Which capitalization scheme is our official one these days?[NS72] [CN73] Should we state that for Random designs and Management objectives related to LUP/RMPs that all Core must be collected, but for other designs there is room to use just a subset of the core methods? [CN73][CN73] [DCJ74R73]Seems like an important point to make for at least a couple of reasons: 1) For non-LUP/RMP designs, doing a subset of core reduces time per plot and could allow either more points or less total field time. 2) For those with concerns about time commitment for doing AIM, the option to do a subset of core might encourage adoption.[DCJ74R73][DCJ74R73] [CN75R73] [CN75R73][CN75R73] Added text to address - please verify for all resources.[NM76R73][NM76R73][NM76R73] this works for lotic[ML77R73][ML77R73][ML77R73] [CN78]Here we start with the “what to do” and follow with the what this step is/why it’s important… [CN78] I switched the paragraphs so we start with the why first, consistent with step 1a.[KA79R78] Add Riparian and Wetland also??[KA80][KA80][KA80] I generalized it by just taking out the resource names. I think this works.[RV81R80][RV81R80][RV81R80] Do we want to provide an example of this? We could use springsnail surveys in NV/UT at R&W sites as a potential example.[KA82] I propose we include this example in the actual monitoring design worksheet example guide and not here as we do not appear to have examples in other steps. If we do have examples in other steps we should probably remove or ensure there is consistency between steps.[NM83R82] Searched this document for the term “supplemental indicators” to get an idea how they are being addressed. A couple of thoughts: 1) Is there a list of/resource for known/common supplemental indicators and their methods? This might help folks understand possibilities and also not have to start from scratch. 2) Should we be more explicit about the data management responsibilities that accompany supplementals? E.g. a. NOC AIM does not support ingest and management of supplemental indicator data. b. How to create forms for, access, and manage supplemental data. c. How to document and make discoverable supplemental data. etc...). Seems like there could be a whole separate document just on supplementals... (and maybe there is?)[DJ84][DJ84] We do not have a list because supplementals can be whatever they need and that list would be HUGE! Somewhere I thought we did define supplemental better about that we do not train, support, manage data, etc. for them. [CN85R84][CN85R84] With Aleta’s new conversations about contingents/supplementals and central NON-AIM storage I think we could mention something somewhere, but I would love to stay away from too much detail since we say we don’t support them. I think there’s a comment later in the doc about supplementals too To address all of this we can just add Emily's briefing paper as a reference when it is finalized[NM86R84][NM86R84] Should we reference the later step that talks about how to select methods etc.? Just something like “see step X for more information about supplemental methods” [CN87] Add Emily's briefing paper on supplemental/contingent indicators as a reference once it is developed. 
[NM88] [NS89]4.3.4.1[NS89]

[NS90]I'm pretty sure that the step labels are coming from the MDW, but I'm finding them incredibly confusing in combination with the unifying numbering used by the document, at least when used like this. I agree, so I've added in the 7 steps from the MDW as a list just above here, in the intro to the 7 steps. I think the 7 steps list could also be presented in a table; that might look nice, I'm not sure. But I think either a list or a table of the 7 steps is essential at the outset of this section. Then, I suggest deleting the numbering that is currently in front of each step's description below. We could put "MDW" in front of each step throughout Section 4, so they'd read "MDW Step 3a" or "MDW Step 4c," etc. That way, it'd be clear what section you're in (4.0 Design) without the unifying numbering.[RV91R90]

[CN92]Yes, confusing also because we're telling them to ID the STUDY area, but then explain the reporting unit and we give examples of them in this sentence, but don't really explain the study area very well. [CN92] From Glossary: Study Area: Defines the extent of your population and is the maximum area you want to draw conclusions about. See Project Area. Project Area: Describes the broadest outline of a project. Usually, the boundary of a field office, district office, or other administrative boundary. A project area contains the target population (e.g., BLM land within a field office boundary). See also Study Area.

This is a little confusing; consider simplifying and reducing the amount of parenthesis items since they are discussed more later. [KA93] I would remove reference to strata since we don't define it until a couple sections later.[ML94]

[NS95]Do we need to more clearly differentiate this from the study area? Something like: "A target population must be limited to only places where data will be collected and fall entirely within the study area. This is in contrast to a study area which may include parts of a landscape that will not be sampled, e.g. a watershed as a study area may include privately-owned land, but the target population would not."[NS95]

[NS96]I don't quite understand what this is trying to get at. Is this for revisit designs?[NS96] [LTAC97R96]Yes, I believe so; I added some text to clarify this.[LTAC97R96]

[PLJ98]Still applicable?[PLJ98] @Miller, Janet L does lotic still use the master sample?[KA99R98] Maybe sometimes. For a lot of designs last year we did not. I think we did for 1 design. As time goes on, it will probably be used less. So I think deemphasize this. I hope I am not wrong on this![ML100R98]

I'm not sure we have a specific revisit design section.[KA101] Modified to touch sections with information; no specific section exists. [YJ102R101]

[PLJ103]What does this mean exactly?[PLJ103] Assigned to @Yokomizo, Erick J, @Stropky, Michelle R and @Krott, Meghan A to fill in section with revisit info and clean up verbiage if needed.[KA104R103] For lotic, revisit designs should be pretty straightforward as long as the original design was meeting the monitoring needs. There isn't usually additional information needed that I can think of.[ML105R103]

[PLJ106]Benchmark tools mentioned here and in A&R section. How to address that redundancy/give clear guidance on where this step should/can be completed? [PLJ107R106]Also add language on how to choose one or the other.
Example, “if you are renewing a design and already have benchmarks established use the resource cond table…” [PLJ108R106]Wondering if it would be worth having a section in the overview on benchmarks where we repurpose tech note 453 benchmark language pg 12 “benchmark values come from existing policy and plans….” With the new benchmark tools are people still filling out monitoring objectives in the benchmark tools? Is this first paragraph still relevant or do we want to focus on the "Resource Condition and Trend Objectives Table? @Nafus, Aleta M [KA109R106] Also, consider either moving or deleting first paragraph so we give the why before the directions.[KA110R106] I would remove mention of the benchmark tool here and have folks fill out monitoring objectives in the MDW. The excel benchmark tools are going away, no one touches those until they are ready for analysis, and even when they do use them they often don't fill out their management objectives in those tools.[ML111R106] [LTAC112R106]I'm fine with removing reference to the benchmark tools here and just directing to fill out objectives in the MDW. I do envision a future where the two places (MDW and tools) are effectively the same place or at least are able to reference each other via an objective database or some such. [NS113]Inconsistent capitalization. We need to decide if this is capitalized every time[NS113] [PLJ114]Lingering questions of how to address tools in this guide… [NS115]Inconsistent capitalization throughout the document. Initially "Management Goals" but usually "management goals". Just need to settle on one is all[NS115] Provide guidance on determining the sensitivity of AIM methods? Or would that have already been done during the process of choosing the suite of core methods when AIM was developed?[PJ116][PJ116] [DCJ117]Subject-verb agreement?[DCJ117] [CN118]This is where I think appropriate sampled designs need to be mentioned… There’s lots of options… BACI designs, Systemmatic random, GRTS, targeted, DMA/Key area, etc. But this is where they should consider this I believe[CN118] More appropriate to say field office boundary/area?[PJ119] I like this addition but I think it fits 2b better. See second paragraph about how and where in 2b[CN120] [PLJ121]Unless doing the monitoring objectives worksheet?[PLJ121][PLJ121] [NS122]4.3.2.1[NS122][NS122] [PLJ123]Just lotic?[PLJ123][PLJ123] [CN124R123]If using benchmark tool. Gets back to your question about if they should use table or tool. [CN124R123][CN124R123] Assigned to @Yokomizo, Erick J, and @Stropky, Michelle R to see if this applies to terrestrial and RW as well.[KA125R123][KA125R123][KA125R123] Does this refer to the idea of nesting designs within projects? If so, then no as terrestrial has a 1:1 design to project but - there could be reporting units within a design - so - if that is the case then yes[NM126R123][NM126R123] Yes, this is about the fact that there can be reporting units within a design, so the answer is yes for both Terrestrial and R&W. I've deleted "Lotic' so now it applies to all projects.[RV127R123][RV127R123] [NS128]I glanced quickly above and I don't think that indicators are really defined before this in the document. 
At the very least, this might need some examples like: "e.g., percent bare ground, percent cover by annual grasses, soil stability rating."

[PLJ129]I think what's really confusing about having benchmarks mentioned both in design and A&R is how those efforts differ or fall into the process at different times.[PLJ129] [CN130R129]

[DCJ131]Consider making this a numbered or bulleted list (numbered lists imply priority).[DCJ131] [DCJ132]Is there some kind of hard carriage return here?[DCJ132] Why do we have examples of monitoring objectives here and on the above page? Combine.[CN133]

[LTAC134]Study area or sample population? In the definitions above we try to differentiate the two, so this is confusing. Since we're talking about sampling intensity, I think sample population is the term we want here. Although we haven't really defined sample population - so I replaced with target population.

[NS135]4.3.3[NS135]

So I added this paragraph. Not sure if it goes here or if it is correct for other resources (terrestrial - do you wrap targeted and random points up together in weighted analysis? Lotic does not). But I thought we should somewhere distinguish between random and targeted points.[ML136] [KEJ137]I like this paragraph; not sure that it needs to be its own section though.[KEJ137]

[PLJ138]If these are types of lotic strata, do we need to call them out here or would it be more appropriate in the lotic-specific section?[PLJ138] [CN139R138]"Land" is referring to T and "Water" to L... I'm not sure. [CN139R138]

[NS140]I want to make this bold and italicized![NS140] [NS141]This did say "upload" but there weren't further instructions as to where, so I figure maybe being more general would be helpful.[NS141]

[CN142]We actually don't do this for our RV categories... we just force a min of 3 usually. I don't think we should remove though because it's very applicable to other stratification. [CN142] [CN143]Less common? I think you're saying that small wetlands are less common (or account for less of the resource) so they will have inadequate representation if we do not stratify. Maybe explain fully? [CN143] FYI, that's why we use stream size as well: rivers have fewer stream kilometers than small streams, so if we didn't stratify we might only get 1 river site or none at all. We use stratification to get at least 3 (see the allocation sketch below). @Reynolds, Lindsay V can you resolve this language please? Thanks![YJ144R143] I checked and elaborated on this point to clarify what Nicole is talking about.[RV145R143]

Why delete this sentence? [CN146] [CN147]Why delete this sentence? [CN147]

This is currently in both paragraph and bulleted form, so one or the other should be kept. [ML148] [NS149R148]These two chunks have diverged, so they'll need to be combined, whichever format is kept.[NS149R148] I do think that the bullet points as a series of steps to consider is probably easier on the reader. @Nafus, Aleta M and @Yokomizo, Erick J I combined this into one chunk and deleted the duplicates. What do you think about the use of numbers and bullets?[KA150R148] @Krott, Meghan A I like the use of numbers for this list. It seems like from a visual perspective, the next list could be re-ordered and numbered as well.[NM151R148]

[CN152]This doesn't seem to have enough context.[CN152] And the tables don't outline a process... Doesn't this step fit in step 1b instead, where they determined the supplemental was needed? Maybe this should be "Return to step 1b and ensure your supplemental indicator and methods will provide the specific data needed to address the management questions."
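Purely as an illustration of the minimum-points-per-stratum idea in the stratification comments above, here is a small sketch. The stratum names, extents, point budget, and minimum of 3 are hypothetical placeholders, not values from any actual AIM design:

# Illustrative only: allocate a fixed budget of points across strata in
# proportion to their extent, while forcing a minimum per stratum so that
# rare strata (e.g., rivers vs. small streams) still get enough samples.
# All names and numbers below are hypothetical.

def allocate_points(extents, total_points, minimum=3):
    """Proportional allocation with a per-stratum floor."""
    total_extent = sum(extents.values())
    allocation = {name: minimum for name in extents}  # start with the floor
    remaining = total_points - minimum * len(extents)
    if remaining < 0:
        raise ValueError("Budget too small to give every stratum the minimum.")
    # Spread what is left in proportion to extent; rounding may leave the
    # total a point or two off, which would need a manual adjustment.
    for name, extent in extents.items():
        allocation[name] += round(remaining * extent / total_extent)
    return allocation

extents_km = {"Small streams": 400, "Large streams": 90, "Rivers": 10}
print(allocate_points(extents_km, total_points=30))
# {'Small streams': 20, 'Large streams': 7, 'Rivers': 3} -- without the floor,
# "Rivers" would have received fewer than a handful of points.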
[CN153]We should consider how to reference the sample sufficiency tables in this section as well... I haven't made it to that section yet, but otherwise the idea of "adequate" samples is a bit out of context. [CN153] Also might consider adding a statement about the central limit theorem (a rough worked example of the sample size arithmetic follows below). Again, I haven't made it far enough down to step 5 to really incorporate or reference at the moment. Analysts can clean up this section please. 4.3.4.2 @Laurence-Traynor, Alexander C @Alexander, Patrick J @Miller, Janet L Ruth[YJ154R153] [LTAC155R153]I reviewed and cleaned this up a bit. I'm feeling good about this section now.[LTAC155R153]

[PLJ156]Provide examples of this? [CN157R156]If a LUP design already exists in your FO, then when you want to do a watershed intensification and need 30 points, you can possibly use the 10 that already exist in that watershed from the LUP design.

[NS158]I'm very unclear when we should be using "target population" versus "sample frame". Historically, terrestrial has basically exclusively called it a "sample frame" in communication with project leads.[NS158]

[NS159]Needs a section reference.[NS159] I don't think we have a section for final designation aside from the glossary section. We briefly mention it in 5.3.1.4.1 Monitoring Design but not in a way that is very useful for this reference. Could we rephrase this sentence to remove the reference to final designation somehow?[KA160R159]

My confusion here is that thus far, I have been filling out these tables. Are PLs supposed to fill these out?[ML161] [LTAC162R161]Yes, I think Project Leads should at least provide the first draft. That's how it's mostly worked for terrestrial.

[NS163]I'm not sure what these four factors are, so they probably need clarification.[NS163] [NS164]Need to find the section reference to point to. [NS165]There's no defining or explaining revisit designs before this point, which feels to me like it should happen instead of waiting until this step. [NS166]I think this is kind of confusingly written right now and probably unnecessary because it's describing how designs work by default and would probably be assumed to work. Can we just drop it? @Nafus, Aleta M @Krott, Meghan A thoughts on Nelson's comment? [YJ167R166]

[CN168]Steve looked at these values, lotic also has some additional work on this, and I'm sure there's also literature... should this be something we try to summarize somewhere? Which indicators can be used to pick up trend right away vs which need many years of sampling to pick up real trend? [LTAC169R168]Good point, perhaps it's worth citing or linking Steve's work here. @Laurence-Traynor, Alexander C Do you by chance have the link or citation to link here? [YJ170R168]

[CN171] The last bullet and the third bullet seem to be saying very similar things; could they be combined? Or if leaving separate, maybe put them next to each other. [KA172] [CN173R172]Agreed... I also don't get what it's saying... Is it just trying to say some designs may not be revisit designs at all?
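As a rough, non-authoritative illustration of the "worst-case 50/50" sample size reasoning mentioned in the sample sufficiency comments, a sketch using the normal approximation; the 80% confidence level and 10% margin of error are placeholders, not AIM requirements or values from the sample sufficiency tables:

# Illustrative only: back-of-the-envelope sample size for estimating a
# proportion (e.g., percent of a stratum meeting a benchmark) at the
# "worst case" p = 0.5. Confidence level and margin of error are placeholders.
from statistics import NormalDist

def sample_size_for_proportion(margin_of_error, confidence_level=0.80, p=0.5):
    z = NormalDist().inv_cdf(1 - (1 - confidence_level) / 2)  # two-sided z-score
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return int(n) + 1  # round up to a whole point count

print(sample_size_for_proportion(0.10))        # ~42 points at 80% confidence
print(sample_size_for_proportion(0.10, 0.95))  # ~97 points at 95% confidence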
[PLJ174]Appendix?[PLJ174] [CN175]We should at least reference this section in step 4. [CN175]

[NS176]Feels weird to me that we have the point allocation table before this step, but I guess as written it's about looping back to the earlier decision.[NS176]

[NAM177]Should we add something about whether the points are randomized or targeted? Or a note that the statistics depend on whether the points were randomized.[NAM177] [CN178R177]I think we need to mention in the above paragraph or maybe even sooner, but yes we need to acknowledge the difference between targeted and random.[CN178R177]

Are we still calling it WRSA or should this be changed?[KA179] [NAM180R179]I think we can just delete; we could add national and local scale if we want to call out the span of available data.[NAM180R179] [CN181R179]Agreed, delete. [CN181R179] No longer calling it WRSA; calling it National Lotic AIM.

Why is this sentence in italics? [KA182] [CN183R182]Not sure, I assume just to make it stand out more? [CN183R182]

[PLJ184]Confused about the A and B bullet points.[PLJ184] [CN185R184]We should probably say, use the tables (referenced below) to estimate sample size, and then these are the two steps on HOW to use the tables. [CN185R184] Meghan to discuss with Aleta about adding in all 4 tables or just 1...[KA186R184] [CN187]https://aim.landscapetoolbox.org/wp-content/uploads/2022/05/tableCIs_updatedJan2017.pdf[CN187]

[CN188]Should we add any information that we recommend approximately 30 points/5 years for lotic and 150 pts/5 years for terrestrial as a starting place? We could get into explaining with the table and all that but don't have to either. [CN188] It seems like we should either change this whole section to say that people should annually assess whether their design is, in fact, on track to meet their needs and should, when all points in a cycle are completed, evaluate whether it makes more sense to continue with the same design (thereby entering true revisit status) or start a new design because conditions or objectives have changed significantly enough that the original design is no longer useful -- at the very least we should reiterate that just adding points to a design is not always a trivial thing and may have analysis implications.[NM189R188]

Is this something Project Leads would know how to do on their own or should they work with the National AIM Team on this? If so, maybe state "in coordination with the National AIM Team".[KA190] Agreed.[DJ191R190] I've never seen this done to be honest... Lotic won't have the data each year to do it. But we SHOULD be doing it each cycle when we produce extent estimates at least. Generally we just use the sample tables to look at worst-case 50/50 conditions and the central limit theorem to make a guess... We should talk about this as a group... We NEED to be doing it as a whole, but every year is likely unnecessary and would cause chaos in our design if we needed to adjust this much. But at least the first 5 years of sampling should be looked at. [CN192R190] Group discussion needed, assigned to @Yokomizo, Erick J @Stropky, Michelle R @Nafus, Aleta M @Reynolds, Lindsay V @Krott, Meghan A [KA193R190]

[NS194]4.3.4.2[NS194]

[NAM195]Aleta work with Nathan/Nelson to identify what information we need to preserve.[NAM195]

[NS196]I've seen a few capitalization schemes for these, e.g., Project Leads, Project leads, project leads.
We should pick one.[NS196] Changing all to Project Lead[KA197R196] [NAM198]Not really sure what to say here except that it is challenging when people find they are short on points and may create analysis challenges down the road[NAM198] [CN199R198]I added some text. I agree[CN199R198] [CN200]This section is missing, we have text on the website but not sure where it went…. Added text from website. Needs checking for current accuracy and riparian and wetland input as well.[NM201R200] @Stropky, Michelle R can you review for R&W @Krott, Meghan A is this good for Lotic? [YJ202R200] That works for lotic! Except that the link for lotic needs to be updated but it won't let me change it for some reason. It should be updated to this permanent link: https://www.blm.gov/sites/default/files/docs/2022-03/Lotic_DataManagementProtocol_2022.pdf[KA203R200] I updated the text for R&W and had the same trouble Meghan had with links! Weird. But it let me add the long ugly URLs in line, so at least we have those in there now.[RV204R200][RV204R200] Do we still want to reference and link to landscape toolbox since that will be going away eventually? @Nafus, Aleta M @Yokomizo, Erick J - group discussion??[KA205][KA205] No. I don't think we need a discussion on this. We shouldn't be referencing the landscape toolbox anywhere anymore. I've updated the text and links.[RV206R205] [NS207]We're calling these sections, not chapters, right?[NS207] [KEJ208]Terrestrial also has a rejection protocol, right?[KEJ208] [KMA209]Need to check in with leads to see if we want all of this info in section 3 or if we should remove some of it.[KMA209] Is this a realistic timeframe? Our contracts start ~9 months prior, NOFOs are 12 months (at least this year). [CN210][CN210] I would say, 3-6 mons before data collection starts. This year, they were still trying to get crews hired 1-2 weeks before.....[RV211R210][RV211R210] It makes no sense to have this section here - Need group conversation about remote sensing sections and whether they can be optional or entirely removed [NM212][NM212] [KEJ213]This is Shannon's language, but simplified by me so that it is consistent with current workflows.[KEJ213] [CN214]This section seems like it would be beneficial to come before the section above… This is giving context to why point eval and rejection is important, what you do with it, etc. [NAM215]I can’t remember – do we have a discussion about random and targeted points and the benefits/disadvantages and goals of each? If not, we need one. If so, this discussion should be changed so that it focuses on data collection of random points and why it is important to maintain the spatial balance as opposed to a description of probabilistic monitoring design statistics. – much of this paragraph should occur in the creation of the monitoring design. I don't remember reading a discussion about targeted versus random and benefits/disadvantages of both. It would be good to add it in. [KA216R215] [CN217R215]I definitely agree we need to acknowledge the multiple “appropriate designs” they can use for monitoring and we need to specify how each step does or doesn’t relate throughout the document… Rejection criteria for one type of design may be different than another… This section starts to get into the generic approach for the random designs. I actually think this paragraph is good intro to this preparation step. We have something similar in our design management protocol that talks about familiarizing yourself with the design.. 
but agree it needs to also be addressed in the design section a bit if it’s not. [CN218]We don’t remove, we use them to adjust[CN218] [CN219]We don’t know this in many cases because we haven’t gotten there… This is nuanced though so if we want to keep it as “is still” I don’t think it makes a huge difference[CN219] [CN220]This starts to get into how to trip plan… [CN220] [KEJ221R220]It was hard to separate them 7 years ago, and it STILL is! I remember a lot of long conversations about this language.[KEJ221R220] [NS222]This has been capitalized a few different ways throughout the document[NS222] [CN223]We provide this guidance in our design protocol. I don’t think we want them contacting us every time right? Does T and R&W have this documented somewhere? Maybe we add another sentence here? [CN223] Balancing logistics and travel efficiencies with sampling in order can be tricky, the goal is to avoid spatial patterns in the data, but also ensure that by the end of a field season all holes have been filled. If this becomes too difficult reach out to the National AIM Team for more guidance. I incorporated Nicole's suggestion above. Needs reviewed by T and R&W still[KA224R223][KA224R223] [CN225]Term discrepancy. We use Samples and Not Sampled for Lotic. [CN225] [CN226]Interrupter, needs a comma afterwards. Check throughout document. [CN226] [CN227]Terms… Lotic doesn’t really get into this as much as we use to during this step- it’s mostly more in analysis that we use these high level categories. It’s obviously target if it’s sampled… If it’s one of the other things we need more information than just its designation so we now combine with our “reason not sampled” which has a bunch of categories… I’m not sure if terms will cause any confusion or not, but I’d like to hear updates from other resources on how they use this. Assigned to @Nafus, Aleta M and @Reynolds, Lindsay V for review and maybe group discussion if needed[KA228R227] Yes, we should discuss this while looking at our design management protocols. Like lotic, we have "sampled" and then a bunch of not-sampled reasons. We should refine the language here together so that it is broad enough to cover all resources.[RV229R227] [CN230]We switch on and off with tense of the steps… Should these all be changed to look, get, go, etc. and change the heading? [CN231]This is going above and beyond, it’s the most expensive part of sampling, why tell them to do it more than needed, I don’t think this should be in the includes list… [CN231] [CN232]We don’t provide reasoning for any other bullet, I suggest removing or providing the reason for each above this one. [CN232] [CN233]See section above about Best Practices for implementing a monitoring design. Some overlap information, could combine and reduce. [CN233] [CN234]Formatting of this list is off… [CN234] [CN235]I think we can simplify this section by taking the resource neutral information and putting it up front. The purpose of the trainings: proficiency and consistency in methods… etc. Then we can reduce the repetition of this. We can also make a Instructors’ training section resource neutral and just mention R&W as a future idea? [CN235] Also do we want to mention that field training consists of both virtual and in person? [DCJ236]Should we mention something in this section about roles that require periodic re-training?[DCJ236] I like this addition, all resources have different rules on frequency of attendance depending on audience. 
[CN237]

This description and the R&W description should match better in content of the description. [AN238] Can Project Leads discuss and decide which description they want to follow for all resources? I agree that we should probably have all of them match up similarly in style. I modeled lotic after a mixture of both R&W and terrestrial for now but can change as Leads see fit. [KA239R238]

[CN240]This section really focuses on having dedicated crews, so really just the LUP-type sampling... how about other programs that might be using field staff to collect data? Can we make this more generic overall? For example, instead of project leads reviewing with crews, maybe just say field data should be reviewed when data collectors return from the field so that resource specialists or the AIM team can be consulted as needed, or something? [CN240] During the planning and initiation review we did talk about how data can be collected by BLM staff or crews, so I think we should acknowledge that throughout, right?

[CN241]Gear purchasing would be prep, right?[CN241]

[DCJ242]Maybe say "by the end" or "no later than the end". It's OK to document things as you go. Don't want to imply that such documentation should/must wait until the end of the hitch.[DCJ242] [CN243R242]Yes, they should really document as they go. They upload during or at the end, but maybe we just make it generic. [CN243R242] How about suggested track changes? Looks good. [DJ244R242]

Project leads and/or crew managers? I think for a lot of the contract crews, the crew managers do a majority of checking in following each hitch. [KA245]

[CN246]We only have one month and end of season. [CN246]

[NAM247]I think this is generally true for all resources that it is helpful; just whether or not it gets submitted to the NOC might be a question. [NAM247] [CN248R247]With our new Webmap tracking and end-of-season check-in/table, it can really be hit or miss as to whether this is worth the time and money. I would rather they put all the info into the tracking on the Webmap than have another doc that spells it out. Maybe we can autogenerate a report from the webmaps? [CN248R247] I DO think it would be helpful if these end-of-season reports could highlight general themes they encountered! For example, we ran into many reaches that were sampled in the past but were now so impacted by beaver activity that we were unable to sample them. These larger patterns would help at the local level and the NOC level. Interested to see a terrestrial report. In the past, all they did for us was copy similar info from our Webmap eval info into a document and then say how many sites they sampled, rejected, etc., which isn't helpful. They did add photos, but again these could all likely be autogenerated from the webmaps these days.

As soon as you can, please add text to this section (not comments) in your resource. This is just basic information regarding Electronic Data Capture and Data Management (we can reference data management protocol documents). But if there is critical information that belongs here can you please add it. Thanks! [YJ249] @Redecker, Nathan P @Shank, Logan T @Scott, Julian A Lotic sounds good to me.[ST250R249] I don't think these paragraphs need to be broken out by resource.
We all do it the same: Field Maps and Survey123, bring paper sheets as a backup, and refer to resource-specific data management protocols for more details...[RV251R249] [CN252]We do not use this anywhere else in the document[CN252] [DCJ253]“reaches”, “sampling locations” not sure what the preferred generic/all-resource term is. I think I recently heard that the term might be “points”[DCJ253] [CN254R253]Yes “points” can be used for resource neutral, especially because they might not actually “sample” a location because it might get field rejected. [CN254R253] However, just “point” in this case sounds a bit… odd. If we put design in front will it sound too much like “random design” and not be interpreted for targeted and random? [CN255]We also collect not sampled locations[CN255] [CN256]This is pretty specific and if not familiar you’re left wondering what these are. Made suggestions[CN256] [DCJ257]Does it bear repeating here that crews should navigate to the sampling location, acquire the GPS coordinates, and launch all Survey 123 forms from Field Maps? Failure to do so causes considerable QC problems. [DCJ257] [CN258R257]Chris we’re on the same wavelength, I read this comment after suggesting super similar edits! [CN258R257] Add ingestion rules (generic and maybe resource specific)[NM259] ex: training, calibration, minimum data methods requirements (when applicable); and other rules [NS260]I know that inconsistencies are because the text is coming from multiple documents and plenty of individuals are editing, but we should decide on a capitalization scheme for these two[NS260] [NAM261]Do we pull the text out of the data management protocols or link people to the data management protocols?[NAM261] [DCJ262R261]Also, with the new availability of in-season QC data should we discuss appropriate and inappropriate uses for it?[DCJ262R261] [CN263R261]I feel like I’d need a reminder of what we were planning to add here… Is it getting at what QC we do after they collect data? Or is this QC after they have data in the database but before they just jump into using it.. I’m assuming it’s during the field season, but we actually start getting into this a couple paragraphs above talking about Project leads/crew managers check in with crew after hitches… [CN263R261] [CN264R261]In general I am a huge supporter of making broad statements in this document that give enough info that people can feel good about the process/workflow and details but to reference other documents for the nitty gritty. I do think we can pull some text though because we need enough info for those that do not actually ever need to go to the individual documents but need to feel confident in using data or supporting AIM in their office.[CN264R261] [KEJ265]I don't see this text in Shannon's document [KEJ266]In Nov 2022 the Core team discussed focusing this section on this document (which builds on other workflows, but is still generalized unlike TN453): Analysis and Reporting Workflow_Revisited_2021.docx [NS267]Capitalize?[NS267] [CN268]Not sure the best way to say this but I think it’s important to highlight up front [NS269R268]I tried to help this sentence out a little bit [LTAC270]I'm assuming this means NOC analysts? 
May need to specify given new state analysts. [KEJ271]Add link. [KEJ272]Add link.

I think this could easily be misinterpreted as percentage of the landscape so maybe best to avoid?[LC273]

Should these all be bulleted under the question at the top or are they instead different types of analysis that we are trying to describe? It seems like they are other analysis types, but the way it's formatted it reads as if they are trying to answer the top question. Consider reformatting.[KA274]

[CN275]Consider how the tools sections throughout the document will be set up. Are we just listing them? Are we providing a description? Either way, currently this section can be pared down a bit to be less overwhelming. [CN275] Also we might need to consider whether an appendix listing more info about the tools might be appropriate or not... see the bottom of the A&R workflow document I linked in my comment above. [KEJ276]See this document which also has a list and a description: Analysis and Reporting Workflow_Revisited_2021.docx [KEJ276] [LTAC277R276]Do we need to make the workflow in the above document into a figure to put here? It seems redundant to copy across the entire table.[LTAC277R276]

Adjust to match text.[NM278] [NAM279R278]\\blm.doi.net\dfs\loc\EGIS\ProjectsNational\AIM\Resource Neutral\Analysis & Reporting\Communication Docs\Analysis Workflows\AR_Worflow_2022.pptx Also - remember to check Emily's A&R tools workflow.[NM280R278]

So old projects should still look at the MDW, correct?[PJ281] They should be! I think again, we're trying (but failing) to speak to the opportunistic uses. [CN282R281]

[LKS283]Mention gathering data from other sources as we go along; encourage using other tools and data – "your analysis is likely to include other monitoring data from a variety of locations" (other AIM data in your office and other data such as …). LOOK at your data before you begin (data best practices). Flags about data reliability / being aware. Refer back to QC section for best practices.

[LTAC284]Added this from the AIM IM attachment as there is no place else data access is described in this document. Feel free to remove if it's too much.[LTAC284]

[NAM285]We should have a document that contains this and addresses timelines, sometimes later depending on state and unforeseen bottlenecks.[PJ286] [PLJ287R286]I've heard from many practitioners that having the data sooner would help facilitate more use in the range program (deadlines to complete docs in many FOs is the end of the calendar year; many don't feel the data is as useful a year later).[PLJ287R286] [LKS288]Ditch timelines.[LKS288] [PLJ289R288]Agree![PLJ289R288] I don't know that this is still true?[NM290] [CN291R290]Even if it is, I don't think A&R is the right section for timelines and contents of databases, right? [CN291R290] Also there was talk that we would still delay, but I'm not sure where that landed. I'm confused by this thread; did something get deleted that this is referencing? @Nafus, Aleta M [KA292R290]

[NAM293]Delete or reword to be more resource neutral, ex: start with the metadata and then consult with State and National AIM Team members to get further clarification. [CN294R293]Hmm... this just made me realize we need to consider who we are providing contacts for and who we are not, or where we are saying people can find those contacts... Especially consider external vs internal.
detail see comment above[PJ295]

This is a tricky spot in the sense that if we are asking them to set benchmark values and benchmark values are a required component of monitoring objectives, should monitoring objectives be refined in this step or at least mentioned?[ML296] [LTAC297R296]Good point. I added a small paragraph at the end of this section mentioning objectives and tying to the design section. Do you think that is sufficient?[LTAC297R296] Yes, looks good![ML298R296]

@Laurence-Traynor, Alexander C Swing at benchmarks vs no benchmarks conversation.[NM299] @Laurence-Traynor, Alexander C I added some content here. Read through and see if you think it is appropriate.[ML300] Yes, it looks good to me![LC301R300]

detail see comment above[PJ302]

@Laurence-Traynor, Alexander C @Miller, Janet L Alex and Aleta discussed - rework section to introduce analysis types, create a workflow - match https://doimspp.sharepoint.com/sites/ext-blm-oc-naim/Shared%20Documents/Resource%20Neutral/Implementation/Implementation%20Team%20Meetings/AIM%20Handbook_Website%20Content%20Revision/AR_Worflow_2022.pptx?web=1 and https://doimspp.sharepoint.com/sites/ext-blm-oc-naim/Shared%20Documents/AIM%20Projects/Agendas%20and%20Working%20Docs/CoP%20for%20AIM%20Data%20Analysts%20and%20Data%20Users/Analysis%20and%20Reporting%20Workflow_Revisited_2022.docx?web=1 with any decisions made.[NM303]

Can we provide guidance for which analyses might be using results from the benchmark tool directly?[PJ304] Suggest referencing the benchmark tool and location but not worrying about version and date in this doc, for durability's sake.[PJ305] Can we make this a little more generic so that it doesn't exclude riparian and wetlands? Or do we know what the Wetlands benchmark tool will do (after XXX date)? Or do we just want to revise this chapter when Riparian and Wetland goes live for analysis?[NM306] Yeah, we don't have anything close to a benchmark tool... maybe we will start sometime next year once Ruth can shift back to Analysis full time, so I would vote for your third option: keep the benchmark tool text here specific to T and L, and then revise when we have our analysis tools.[RV307R306]

Ideally we would change plots to plots/reaches or points. Also, lotic specifies that weighted analysis should be done using random points, not targeted.[ML308] [LTAC309R308]Updated!

Can you compare data in one plot at different time steps? Can you do a change-over-time analysis on one plot? I think you can...[RV310]

I think this could easily be misinterpreted as percentage of the landscape so maybe best to avoid?[LC311] I think this could easily be misinterpreted as percentage of the landscape so maybe best to avoid?[LC312]

Suggest giving a little more detail here about what kind of inference a targeted plot can give vs a random plot, or if done later in the doc, referencing that it will be addressed later.[PJ313] I added detail from the project leads weighting presentation on targeted vs random points.[LC314R313]

Plural tools. How does each one help? It seems like they have slightly different functions based on Nelson's recent presentation...[PJ315] I think the plurality here is referring to the terrestrial and lotic excel tools – I added to this to make it more specific.[LC316R315]

[NAM317]Am I just blanking on a similar section that addresses and talks about plot counting approaches?
It seems like we say regardless of which approach you are using you can do reporting but we don’t actually introduce other options? Or did I just sleep read through that section? This sentence is confusing to me... is there a way to say this better? We could simply say "Contact the National AIM Team for assistance with weighted analysis." Or something along those lines.[KA318][KA318] examples or description of what this means could be helpful[PJ319] [LTAC320]Refer to powerpoints from interagency science call [LTAC321]Use exampleS here: https://doimspp.sharepoint.com/:p:/r/sites/ext-blm-oc-naim/_layouts/15/Doc.aspx?sourcedoc=%7BF913AC7F-B177-4A0A-89CE-917AB7235DF6%7D&file=Session5_3_Figures_Reports.pptx&action=edit&mobileredirect=true [LTAC322]@Miller, Janet L I feel like this section is a little sparse, check it out and pls add anything additional if needed, Thanks! [LTAC323]I think this could easily be mis-interpreted as percentage of the landscape so maybe best to avoid?[LTAC323][LTAC323] Riparian & Wetland AIM -- think about what terms need to be added. One thing that comes to mind is the database name and description[PJ324] I think we are getting rid of "AquaDat" and "TerraDat".... But yes will add relevant terms...[RV325R324] Assigned to @Reynolds, Lindsay V to add relevant R&W terms[KA326R324] [PLJ327]Is AIM officially a program now? Or should we change to “strategy?”[PLJ327] [NAM328R327]It is a program, strategy and set of monitoring methods all at the same time – should we put that in 😉[NAM328R327] [CN329R327]Agreed I’d also expand to “… set of monitoring methods and tools”[CN329R327] Still correct language with pivot away from "oversample?"[PJ330][PJ330] [CN331R330]Who’s pivoting away from oversample? [CN331R330] I do struggle with this wording… We use base within year vs across years somewhat differently but really the same… “Can we just say the points in a design which are intended to be sampled” maybe add ”… to meet sample sizes” ??? [CN332]I’m leaning toward no, especially given that it’s not super clear… Like we might meet benchmarks for plant density at most of our plots but we might not trigger a management action until X number or percent are not meeting… which gets into thresholds… If we give an example we need to make it super simple. Plots in Sage grouse habitat must have X percent sagebrush cover. Lotic’s example is equally as complicated for a glossary… [PLJ333]Would it be helpful to change this to an example using a Terrestrial Core Method applied to a benchmark one might see in a grazing permit renewal? [CN334]Of course all benchmarks will vary based on indicator…a pH value of 9.5 set as the benchmark MUST be different than the benchmark I give conductivity because they are completely different data? Even between something like sagebrush cover and annual grass cover, to me this is logical that these would vary. [CN334] Do we mean that benchmarks for a given point/location vary by potential? Thus benchmarks should be set considering potential. Benchmark groups, or grouping by sites with the same potential can be helpful…. This sentence just seems overly complicated for the message we’re trying to get across. [CN335]Have we defined previously?[CN335] Do we need to call out renewable? I don't know that having it say renewable resource versus resource changes the meaning. [KA336][KA336][KA336] [DCJ337R336]Agreed. And I think there may be some debate over the use of the term withing BLM. 
Emily or Zoe may have context on this.[DCJ337R336] [CN338R336]I'd check the citation too; we might just be trying to keep the same language.[CN338R336]

[CN339]I'm getting myself wrapped up here... but I think we are confusing CIs and CLs in the beginning of this statement... Someone correct me. [CN339] Confidence intervals aren't 80%; our confidence LEVEL is 80%, which can result in a CI that is very small or very large in range - it depends on our data. But if our CL is 80%, then this indicates that 80% of our sampling events will result in estimates that fall within the CI range. Right? (See the worked example below.) @Reynolds, Lindsay V @Nafus, Aleta M I think I changed it to what it should say, but @Nafus, Aleta M, please review as well.[KA340R339] This seems like a pretty broad statement to throw out there. Maybe just leave it as "See Elzinga et al. 2003." Or provide an example of a statistics book with a good explanation of confidence intervals.[KA341]

[CN342]Do we need to specify that we teach, manage, and support these data like we do for Core? [CN342] @Krott, Meghan A also take a look and compare to our field protocol to make sure things align fully. Should this be "Contingent Method" instead of indicator? Same for below as well in regards to core?[KA343R342]

[CN344]I think this is repeated in the next sentence? [CN344] [CN345]Does this include structured workflows for things like QAQC, data collection, etc., or just the organizing and storing aspects? [CN345]

[PLJ346]Do we want to keep DIMA in this doc given that we don't use it anymore for AIM and Jornada is no longer supporting its upkeep?[PLJ346] [NAM347R346]We still have at least 1 FO collecting on DIMA and several DIMAs being unearthed and submitted for ingestion.[NAM347R346] [CN348R346]I think we should keep it for now because it's still a "term" that people use a lot. [CN348R346]

[CN349]Do we define anywhere as an acronym? [CN349] Now called EDIT.[PJ350] [NAM351R350]And now maybe??? Changing again???[NAM351R350] [DCJ352R350]That could be confusing since we sometimes internally refer to the AIM EDT database as "edit".[DCJ352R350]

[CN353]This is just one of the many levels of status or designations we have... Lotic definitely uses these categories when it comes to analysis, but we have a higher level and a lower level status as well.

Looks like the end of the sentence was cut off? This is how it appears on the website....[PJ354] Upon further thought I realize this is defining just a point being sampled. Is this the current language in SDD? Target sampled seems confusing with the term Targeted Plot.[PJ355R354] Is this still applicable with the change in sample designs (like how oversample is not a thing anymore...)?[PJ356]

Changed to BLM-managed lands instead of public lands since not all public lands have Land Use Plans (or describe them the same way) and we are discussing them in relation to BLM management.[KA357]

[DCJ358]Add entry to the Glossary?[DCJ358] Rethink wording to call it National AIM?[NM359] Changed to BLM-managed lands instead of public lands since not all public lands have Land Use Plans (or describe them the same way) and we are discussing them in relation to BLM management.[KA360]

[NAM361]Should we add that it no longer seems to be used? [CN362R361]We use it where applicable, but we don't have the typical benefits from the MS that we thought we would... it's more of a convenience. [DCJ363R361]This seems like it could be a sample design easy button for some folks. This is the first I have heard of it. Maybe it's not being used much because it's not promoted enough?
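To illustrate the confidence level vs. confidence interval distinction discussed above, a minimal sketch; the indicator, plot values, and 80% level are made up for demonstration and are not AIM data or policy (a t-interval would be more precise for a sample this small):

# Illustrative only: the confidence LEVEL (e.g., 80%) is chosen by the analyst;
# the confidence INTERVAL that results can be narrow or wide depending on the data.
from statistics import NormalDist, mean, stdev

bare_ground_pct = [12, 18, 25, 9, 30, 22, 15, 27, 19, 11]  # hypothetical plot values
confidence_level = 0.80
z = NormalDist().inv_cdf(1 - (1 - confidence_level) / 2)

m = mean(bare_ground_pct)
se = stdev(bare_ground_pct) / len(bare_ground_pct) ** 0.5  # standard error of the mean
half_width = z * se

print(f"Mean = {m:.1f}%, 80% CI = ({m - half_width:.1f}%, {m + half_width:.1f}%)")
# Interpretation: if sampling were repeated many times, about 80% of intervals
# built this way would contain the true mean; the interval's width comes from the data.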
[CN364]Needs to be reviewed and likely updated. I think it needs to focus more on the science partner support role they play for AIM these days over the idea of leading AIM monitoring efforts. For example, we might steal language from the CNHP description. CNHP:[CN364] Colorado State University's Colorado Natural Heritage Program provides science support for the Riparian & Wetland AIM program through research and development of the field methods protocol, training support, data stewardship, indicator development, and sample design and analysis support.[CN364]

@Nafus, Aleta M and @Cappuccio, Nicole I just added this. Do we have a real definition somewhere? Please check me here....[RV365] Fuzzy gray area - are HQ AIM part of the National AIM Team?[NM366R365] [DCJ367R365]Concur with Aleta. I'm not clear on how HQ AIM is related to NOC AIM. My understanding is HQ AIM staff are part of the National Team. I added that.[RV368R365] [CN369R365]Nice add! I don't think I've seen a real definition anywhere. I'm curious how Melissa sees HQ or Emily/Zoe see themselves here. @Dickard, Melissa D any thoughts here?[YJ370R365] Should we bring this to a Core Team? [RV371R365]

[DCJ372]Medium resolution what? Dataset? Imagery?[DCJ372] [CN373R372]1:100,000 scale; there's a high resolution 1:24,000 scale too.[CN373R372]

[CN374]Is this helpful without considering context? Management vs monitoring, for example, which are both defined here.

At least for terrestrial, it seems we have moved away from this term. If so I would say potentially keep it depending on how much we discuss how the program has changed. Do we still use this term frequently?[PJ375] [NAM376R375]Unfortunately we use this term all the time to mean 2 different things, so it needs to better reflect the fact that it has context rather than deleting.[NAM376R375] [DCJ377R375]Do we need to define another term for one of the two cases?[DCJ377R375] [CN378R375]We also use this term. We haven't had the level of confusion that terrestrial has had, but I really think the term itself can be defined a bit more broadly... maybe "Points to account for failures/rejections of base points to ensure we meet sample sizes" [CN378R375] Or something generic that doesn't necessarily talk about replacing base points or within/among years... OR we hit it directly head on... Talk about base and over when we draw points (among years) vs using these terms within years where some design base points turn into yearly oversamples.

[CN379]In the above info somewhere we actually talk about how reference can be defined many different ways... but we don't have it in the glossary or reference it here. Is this an issue or okay? [CN379]

[CN380]I wonder if we should add this as a definition? We all collect this type of information as part of our methods, right? But we use the term covariate more than physiographic properties... [CN380]

Still true?[PJ381] [CN382R381]For lotic, yes, still just a few. [CN382R381]

[DCJ383]Is this phrase necessary?[DCJ383] [CN384R383]I agree no; it also creates an incomplete thought. Used for what? If I look on the website it actually has an "is" between the "that" and "can", so I'm wondering if we just never finished this sentence... [CN384R383] If we want more, I say we link it back to being used to address management goals, or adaptive management, or something broad.

[DCJ385]Different from what?[DCJ385] [CN386R385]I think this means it can have "various" reporting units.
[DCJ387]I'm not sure I understand more about reporting units after reading this definition...[DCJ387] [CN388R387]I don't know if I can suggest something right now, but basically we have the project area, and within that we have the areas where we need to report out on the conditions and trends (reporting units). So you can have multiple reporting units within a project/study area. [CN388R387] EX: Salmon FO is the study area; reporting units are Salmon FO, Lemhi Watershed, Sage Grouse Habitat, Bull trout habitat. Strata might be something like streams with bull trout and streams without bull trout.

[CN389]Interesting that this also doesn't mean the set of points intended to be sampled for specific management objectives/goals, and it's focused on all the planning and specs of the design. [CN389] I haven't heard these terms used before; was this a point of confusion in the past?[PJ390] [CN391R390]Goes to address why the MDW name sometimes confuses people; it's more of the monitoring plan, but we often use sample design to refer to the actual points (which includes details on sample size and strata).[CN391R390] @Nafus, Aleta M is this still true, do we need this sentence about "term can be used interchangeably"? [YJ392]

[DCJ393]Is this a citation?[DCJ393] [CN394R393]I actually think this is referencing our MR tool, going back to our Master Sample... I don't think this is needed.[CN394R393] But some examples of the geospatial layers might be helpful. See edits.

Is SARAH mentioned in this document? If we speak to past tools then keep, but maybe, like DIMA, we should remove the reference?[PJ395] [NAM396R395]I could find no other references to SARAH. We could leave it in and add that it is obsolete as of 2021? Or remove.[NAM396R395] [DCJ397R395]I could see a brief discussion of DIMA and SARAH being helpful to readers that are new to AIM because at some point they will likely hear these referred to in email or conversation. However, I do not feel strongly about including that if others disagree 😊[DCJ397R395] [CN398R395]I'm fine with removing; unlike DIMA, it's completely retired in all aspects. [CN398R395]

[CN399]Do we need to define this too? [CN399] [CN400]?[CN400] [DCJ401]"Get sampled at the appropriate intensity"?

[CN402]This is done with benchmarks or benchmark groups, not with strata... @Laurence-Traynor, Alexander C @Miller, Janet L please review and update as needed.[KA403R402]

[DCJ404]Not sure what this means...[DCJ404] [CN405R404]After sampling the original design we didn't have enough points to meet our sample size needs, so we drew more points to "supplement" the original design.[CN405R404] [LTAC406]@Miller, Janet L May want to check this definition is consistent with how lotic uses this term. Please edit as needed. [LTAC406]

[DCJ407]For clarity, maybe contrast with Core Indicator.[DCJ407] [CN408R407]We should at least state that AIM doesn't have standard methods, training, or data management processes for these, but they can be sampled along with the AIM methods. [CN408R407]

[DCJ409]How does Study Area compare to Sampling Frame and Reporting Unit?[DCJ409] [CN410R409]Sample frame is the GIS layer, or list of items to draw points from... the sample frame is encompassed within the study/project area. [CN410R409] Multiple reporting units can be within a study area/project area. [CN411] I think it's going back to the MR tool and website... https://www.monitoringresources.org/ They do have a glossary.

[DCJ412]I'm not sure what "state level" means.
[DCJ412] I'm not sure what "state level" means. I assume it means data is available on a state-by-state basis, but I don't think that's accurate, so I'm not sure what needs to be said here. Also, a fair amount of time and effort goes into publishing TerrADat each year, so maybe giving more context about its value/usefulness is warranted?
[DCJ413] For example, the area of a stratum divided by the number of points in that stratum is used to account for the influence of the points from each stratum in analyses. (Or something like this. Do I have this correct? Or is this redundant with the rest of the definition?)
[CN414] I think this is redundant. If we want an example, maybe we just give one with numbers: if there are 100 stream km in stratum A and 50 points, each point has a weight of 2 stream km; if there are 25 stream km in stratum B and 25 points, each point has a weight of 1. Points in stratum A have a higher weight, so each point has a larger influence on results compared to the other stratum. (A worked sketch of this arithmetic follows at the end of this comment block.)
Surely there are more citations than just these two. What about "Monitoring Resources 2017," which was referenced several times? And Elzinga et al.? Also NRCS 2017. We need to review the entire document for all citations to include.[KA415]
[KEJ416R415] Yes. We should probably reference the protocols also... TN 453... IIRH versions 4 and 5 (Pellant et al., 2021ish)... the new LUP workflow.
All manuals, protocols, and TN 453 have been added. Still need to add IM 2009-007, IM 2016-139, and NRCS 2017.[KA417R415]
I'm not sure if this is the correct way to cite this document since it is still a draft. @Reynolds, Lindsay V, can you review and update this citation?[KA418]
This works fine. I will update once we have a TR in press.[RV419R418]
Need to add these.[KA420]
[CN421] NC: not reviewed yet.
When y'all are ready to review this, ask me for the latest version of the AIM Manual.[KJ422]
I added R&R from the draft AIM Manual for the National Program Lead, NOC, and State Monitoring Coordinators. We need to review as a group the R&R for the other components that aren't in the Manual and also review NOC and State Leads to ensure they are complete. Assigned to the implementation team to discuss on the wrap-up call.[KA423R422]
[NC424] How about ideas such as: connecting BLM to science; ensuring monitoring is meeting the needs of BLM; scarce-skill specialists and customer support. We cover some of these in some of the bullets, but is there anything we can make clearer?
Should this be changed to National AIM Team instead of NOC? I'm not sure when we should be using NOC versus National AIM Team, but we should probably review the entire document for consistency on this.[KA425]
[DCJ426R425] Agreed.
[NC427R425] I do wonder whether there is benefit to separating out how we differ from our science partners? I'm not sure, but it's something to consider. Ha, I just read the next comment from Lauren. @Nafus, Aleta M @Reynolds, Lindsay V, what are your thoughts on this idea of incorporating or calling out differences?
@Cappuccio, Nicole, sort of like sub-bullets for the National AIM Team, NOC, and science partners, and then maybe sub-sub-sections or parentheses identifying where we overlap and where we don't, as it applies differently to each resource? I can see that being useful to communicate.[NM428R425]
[NC429R425] I like that idea of sub-bullets. Also see the comment on the next group of people; maybe they actually need to be a sub-bullet section of the NOC too, depending on who all this group includes.
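Building on the numbers in the comment above, here is a minimal sketch of the weight arithmetic (hypothetical values only, not an AIM dataset or tool): each design weight is the stratum extent divided by the number of points sampled in that stratum, and the weights then scale each point's contribution to an estimate.

# Minimal sketch (hypothetical numbers from the comment above): a design weight
# is the stratum extent divided by the number of sampled points in that stratum.
strata = {
    "A": {"stream_km": 100, "points_sampled": 50},
    "B": {"stream_km": 25,  "points_sampled": 25},
}

weights = {name: s["stream_km"] / s["points_sampled"] for name, s in strata.items()}
print(weights)   # {'A': 2.0, 'B': 1.0} -> each point in stratum A "represents" 2 stream km

# Example weighted estimate: the estimated share of stream km in the target
# population meeting a benchmark, given hypothetical counts of points meeting it.
met = {"A": 30, "B": 10}
km_meeting = sum(met[s] * weights[s] for s in strata)     # 30*2 + 10*1 = 70 km
km_total = sum(strata[s]["stream_km"] for s in strata)    # 125 km
print(f"{km_meeting / km_total:.0%} of stream km estimated to meet the benchmark")  # 56%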
[PLJ430] It might be helpful to separate the partners out into their own section. I know a lot of people are unclear about our partners' role, and it would help to clarify!
[DCJ431R430] Seems like a good idea.
The first four bullets are from the AIM Manual; the last four bullets are from previous editions of the Desk Guide. Needs to be reviewed by the group. @Nafus, Aleta M @Yokomizo, Erick J @Reynolds, Lindsay V[KA432]
Looks good. I added a little clarification in the training bullet.[RV433R432]
This would be the NOC.[NM434]
[PLJ435] A little outdated, as not all states have SG.
[NAM436R435] This is the Anthony Titolo-type position; is there a comparable position for non-sage-grouse monitoring efforts?
[NAM437R435] Nika's position is part of the AIM Team, right? As an AIM/Range liaison? If so, what do we classify her as? Is she one of these, or does she need her own space? And Shannon's previous position is part of the AIM Team too, right? An AIM/Remote Sensing liaison.
[NC438R435] This section clearly needs some major revamping. Even Anthony always participated in our higher-level AIM Team work. If it's just Anthony and Nika, how should we treat them in this document? Are they the "audience" for any of our sections? Do we reference them as people who need to be consulted? Basically, why are we calling them out? Do we need a sub-bullet in the NOC section?
The first nine bullets are from the AIM Manual; the last eight bullets are from the Desk Guide. Needs to be reviewed by the implementation team![KA439]
[NC440] We want them to filter communication down from the NOC to the local PLs; I'm curious whether we can say this more clearly. We also want them to filter information up to us about NEEDS for monitoring to ensure we are moving AIM in the right direction. @Nafus, Aleta M @Reynolds, Lindsay V, did we have a chat about SLs' roles on one of the Core Team calls? Or maybe when we were considering the purpose of the SLs' call and what we wanted out of it?
[NC441] As defined, I don't think they are doing any QA, just QC, right?
[NC442] Does terrestrial still run this through State Leads like it used to? Can we update the wording to say they should be ensuring QC is complete? Do they still "submit" to the AIM team? Lotic doesn't really do this in the same way; if we can be more general, that would be good.
[NC443] How about "provide supplemental methods training and support"? Or is that the SLs?
[NC444] "Establish" sounds odd. Are we getting at hiring? This is listed in the SLs' section above.
Some of the contracts require contractors to provide their own field gear. Added this line for clarification.[KA445]
[NC446R445] Also, with the funding at the state level, it seems like when states purchase gear it's actually being done at a higher level.
[NC447] We could separate this out and make it more general; should PLs, at minimum, be reviewing calibration data for terrestrial and ensuring calibration happens? Or is this the SL?
[PLJ448] Still true?
Is this really just for terrestrial? Lotic project leads also do QA/QC.[KA449R448]
[NC450R448] It's the "submit to state office" part of this step that is not lotic.
[NC451] Do we mean "apply data to management objectives for use in management decisions"? From my understanding, our resource specialists are not usually the ones MAKING management decisions, and they are not just interpreting data; they need to analyze it. This is oddly stated and, in my opinion, could be stated more strongly and directly.
[NC452] Mixing verb tenses: "planning" vs. "calibrate," etc. Also, we should be consistent throughout this entire list of R&R; check all.
[NC453] This bullet needs an action. Check all bullets to ensure it's clear what action we expect on that topic.
Doesn't final data QC assume it would be at the end of the field season? I don't know that we need to call out "especially at the end of the field season."[KA454]
[NC455R454] Also confusing, because we say this is an SL's role too.
This bullet seems like a repeat of the fourth bullet, "Final data QC"; could we combine them into one bullet that addresses QC?[KA456]
[DCJ457R456] Recently there has been discussion about stressing the importance of data QC periodically during the field season, such as at the end of the day and/or hitch. Maybe talk about these separately and emphasize both their importance and their differences.
Updated formatting to match other position titles.[KA458]
This bullet seems like a repeat of the fourth bullet, "Final data QC"; could we combine them into one bullet that addresses QC?[KA459]
[DCJ460R459] Recently there has been discussion about stressing the importance of data QC periodically during the field season, such as at the end of the day and/or hitch. Maybe talk about these separately and emphasize both their importance and their differences.
[NC461] Copied from above: I feel like data collectors need to do this as well, ESPECIALLY if they are not part of a full-blown AIM crew.
[NC462] Why do we spell this out here and not anywhere else? Check the document for these types of inconsistencies.
Crew hiring is in 5.3.1.1; move there?[SR463]
This section is out of place and needs to be reviewed and moved. Options for movement: the Design section, or an appendix that gets referenced several times throughout? Check TN 453 for repeat language.[NM464]
Agreed. This is a precursor to TN 453, so it's not surprising that some language is similar.[KJ465R464]
The whole section will be reviewed by the analysts, including a review of TN 453 for repeat language and consistency. @Laurence-Traynor, Alexander C @Alexander, Patrick J @Miller, Janet L and Ruth[YJ466R464]
I like moving most of this to an appendix. Also, setting benchmarks might go hand in hand with developing monitoring objectives? Just trying to think of where this belongs. In any event, I suggest reducing this to defining a benchmark, instructing readers to determine their benchmarks, mentioning the primary methods of doing that (policy, percentile of variability from reference sites, predicted natural conditions, using other AIM data, peer-reviewed articles, best professional judgment), and then referring to an appendix for the nitty-gritty.[ML467R464]
Does lotic have a good example?[SM468]
@Miller, Janet L, do you know of lotic examples for this?[KA469R468]
I added an example, but I'm not wed to it![ML470R468]
This is a debatable statement and potentially misleading. I would like Jennifer and Sarah's input. I understand the broad idea we're trying to convey, but it needs to be clearer.[CN471]
I had to read this sentence several times to understand its meaning. If we want to keep it, I think reordering it a little may make it clearer, or even replacing "this is desirable" with "more protective (conservative?) benchmarks are desirable."[AN472R471]
But it might not be more protective or conservative than other methods at all. Maybe we talk about considering the accuracy and precision of the model and how this could impact interpretation? I'm not sure this fully solves the issue, but I do think we should acknowledge accuracy and precision as something that should be considered when using models.[CN473R471]
Restructure/improve the language of this bullet point: Sarah, Jennifer, and @Laurence-Traynor, Alexander C @Alexander, Patrick J @Miller, Janet L, Ruth.[YJ474R471]
Feel free to edit. Missing from this section are physically based process models (e.g., AERO, RHEM), which predict potential future responses given current conditions and will inform whether a functional threshold has been crossed.[SM475R471]
I changed a word but am fine with this statement as is, Jennifer.[ML476R471]
Have Jennifer and Sarah review and reword. Include language about accuracy and precision in models. See Nicole's comment on this section for more details.[NM477]
I think it would be really helpful to have a figure alongside this example to illustrate the point; is that possible?[RV478]
Yes! We have some good figures in the lotic reports and presentations. One of the AK reports even has a photo per condition category, which makes a nice visual of the differences.[CN479R478]
There are two figures in Appendix 2 of TN 453 that could be reused here.[KJ480R478]
Add figures as stated above; evaluate for appropriateness and insert. @Yokomizo, Erick J @Stropky, Michelle R @Krott, Meghan A[YJ481R478]
@Miller, Janet L, do you have some figures we could use for this example?[KA482R478]
@Krott, Meghan A @Yokomizo, Erick J: Appendix 10 of the benchmark metadata shows reference conditions for percent fines by ecoregion. But the Colorado Plateau is not one of our hybrid ecoregions, so maybe we should pick a different ecoregion to mention here? https://doimspp.sharepoint.com/sites/ext-blm-oc-naim/Shared%20Documents/Lotic/Analysis%20&%20Reporting/Analysis/Benchmarks/BenchmarkMetadata.pdf?cid=04c109c4-ffe0-4bcf-94ca-13deb4a7a85f (A hypothetical percentile sketch follows this comment block.)[ML483R478]
Alternatively, I think Nicole is referring to a figure from the NPRA report. Let me know if you don't have that report and I can send it.[ML484R478]
@Stropky, Michelle R, are you familiar with this figure by chance?[YJ485R478]
Not necessarily true. For example, if your entire field office is cheatgrass-invaded, reference condition won't provide you with any meaningful management options, because a pathway back is not feasible.[SM486]
I think most of these steps could also apply to summarizing current conditions from AIM data, not just other data. Could we generalize this paragraph more? Perhaps call it "Current conditions from existing AIM and other data" and then adjust the text a bit to fit?[RV487]
Yes! The biggest consideration when using AIM data to assess AIM data is circular reasoning, which we mention in the bullets below.[CN488R487]
That works! That was the original intent of this section. Appendix 2 of TN 453 was built off of this section.[KJ489R487]
I think this should be our lead paragraph on ways to select benchmarks, because it provides a nice generalized way to look at data you already have. So, move it up to right before the "predicted natural conditions" (modeling) paragraph.[RV490]
We were following the order in the figure, I believe.[CN491R490]
Can we distribute the information within this paragraph to other locations within this section? Is it more appropriate elsewhere in this section? @Laurence-Traynor, Alexander C @Alexander, Patrick J @Miller, Janet L Ruth[YJ492R490]
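As a concrete illustration of the "percentile of variability from reference sites" approach discussed above, here is a hypothetical sketch; the indicator values and the choice of the 75th percentile are invented for illustration and are not prescribed AIM benchmarks.

# Hypothetical illustration of setting a benchmark from reference-site variability:
# take a percentile of an indicator's distribution at reference sites in an ecoregion.
import statistics

# Percent fines (<2 mm) observed at hypothetical reference reaches in one ecoregion.
reference_pct_fines = [4, 6, 7, 9, 10, 12, 13, 15, 18, 22]

# If higher percent fines indicates degradation, a benchmark might be set at an upper
# percentile of reference variability (the percentile choice is a documented judgment
# call in the MDW/benchmark documentation, not something fixed by this sketch).
benchmark = statistics.quantiles(reference_pct_fines, n=4)[2]   # 75th percentile
print(f"Benchmark: percent fines <= {benchmark}")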
Not sure about moving this content to other sections. This is covered in Appendix 2 of TN 453 if we want to reduce and cite.[ML493R490]
I agree this section on using AIM data to set benchmarks should be kept with the rest of the info on setting benchmarks, but I'm confused why the whole setting-benchmarks section is here in project planning and initiation. For me it should be part of creating monitoring objectives in Step 2 of the MDW below, as we teach it in Project Leads training. Citing Appendix 2 of TN 453 would also be great.[LC494R490]
As with the others, do we want to acknowledge here that they should make sure there is compatibility among field methods and that the results are applicable to the geographic area of interest?[CN495]
Figure above. Do we need to highlight this concept more? We might need to review TN 453 and reread to see the overlap for this whole benchmark section, but especially to ensure we're giving BPJ the attention we should.[CN496]
It might be just me, but "validate" could imply that BPJ overrules benchmarks. Another word choice might be "corroborate" (lend support to).[DJ497R496]
I think we might actually mean that BPJ can overrule benchmarks. If someone thinks a benchmark is just not applicable and has a justifiable, documented reason for that, I think this is fine. The key to BPJ is documentation and justification.[CN498R496]
Is this supposed to be bulleted or moved left?[AN499]
I think it's supposed to be a bullet.[CN500R499]
Should we also reference TN 453?[CN501]
[PLJ502] Benchmarks are in the A&R section.
[PLJ503R502] Put this info in the A&R section.
@Laurence-Traynor, Alexander C, review and delete if unneeded.[LC504]
[CN505] This doesn't belong in the tools section; it belongs in the steps.
[LTAC506R505] Moved.
[CN507] I don't think this detail is needed here; the tools have detailed instructions, and we're also trying to update the tools, so they might be slightly different in the near future. I say we focus on the concept of the tool: apply benchmarks to determine conditions, track objectives, and communicate with the NOC if weighted analysis is needed. I think the info above does this enough. (A minimal sketch of applying benchmarks appears at the end of this comment block.)
Need a description for this table. Also, it's hard to read.[KA508]
Add to TOC.[NM509]
[KEJ510] I do not think this appendix is needed. We have many example repositories we can point folks to in the main body of the document. Plus, we have now replicated and updated most of this content on the blm.gov/aim/resources page.
[PLJ511] Still applicable? Keep?
[NAM512R511] I think we should turn this into a separate document that we can reference, one that contextualizes the Master Sample and explains its history.
@Nafus, Aleta M @Yokomizo, Erick J, we need to discuss what to do with this section on the Master Sample... is it another appendix, or are we removing it and creating a separate document like Aleta suggests above?[KA513R511]
[DCJ514] R&W?
We don't have a master sample.[RV515R514]
Do we need this table? Is it still relevant and useful for understanding the Master Sample? The links for locations don't work. Need to ask Nicole.[KA516]
[DCJ517] Add R&W? @Stropky, Michelle R @Reynolds, Lindsay V[YJ518R517]
We don't have a master sample. Maybe someday, when all the wetland mapping is done or when there is a reliable remotely sensed product we could use as a master sample. For here in the Desk Guide, it's fine for there to be no section on an R&W master sample.[RV519R517]
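For the "apply benchmarks to determine conditions" concept referenced in the [CN507] comment above, here is a minimal sketch with hypothetical indicator values, benchmark breakpoints, and condition labels (not AIM data or an AIM tool):

# Minimal sketch (hypothetical values): apply benchmark values to plot indicator
# data and record which condition category each plot falls in.
benchmarks = {  # hypothetical benchmark group for one indicator (low, high, category)
    "bare_soil_pct": [(0, 20, "meeting"), (20, 30, "at risk"), (30, 101, "not meeting")],
}

plots = {"Plot-001": 12.0, "Plot-002": 27.5, "Plot-003": 41.0}   # bare soil % by plot

def categorize(value, rules):
    # Return the condition category whose [low, high) range contains the value.
    for low, high, category in rules:
        if low <= value < high:
            return category
    return "not evaluated"

for plot_id, bare_soil in plots.items():
    print(plot_id, categorize(bare_soil, benchmarks["bare_soil_pct"]))
# Plot-001 meeting, Plot-002 at risk, Plot-003 not meeting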