Characterizing anthropogenic noise to improve understanding and management of impacts to wildlife

Diverse biological consequences of noise exposure are documented by an extensive literature. Unfortunately, the aggregate value of this literature is compromised by inconsistencies in noise measurements and incomplete descriptions of metrics. These studies commonly report the noise level (in decibels, dB) at which a response was measured, yet there are many methods to characterize noise levels in dB, and different processing steps can yield different values. It is crucial that methods used for noise level measurement be reported in sufficient detail to permit replication, maximize interpretation of results, enable comparisons across studies, and provide rigorous foundations for noise management in environmental conservation. Understanding the differences among acoustic measurements is vital when making decisions about acceptable levels or thresholds for conservation strategies, particularly for endangered species, where mistakes can have irreversible consequences. Here we discuss how different acoustic metrics are derived and provide recommendations on how to report sound level measurements. We also discuss measures of noise beyond level (e.g. spectral composition, duration) that provide further insight into the consequences of noise and may help develop effective mitigation. It will never be possible to study all combinations of sources and species. Standardized methods of noise measurement and reporting are necessary to advance syntheses and general models that predict the ecological consequences of noise.


INTRODUCTION
The global rise in anthropogenic noise has been driven by transport network expansion, urbanization, and greater demand for natural resources (Hildebrand 2009, Barber et al. 2010). An increasing body of research has demonstrated that anthropogenic noise can impact a diverse array of species in both aquatic (Nowacek et al. 2007, Southall et al. 2007, Slabbekoorn et al. 2010, Shannon et al. 2015) and terrestrial ecosystems (Brumm & Slabbekoorn 2005, Patricelli & Blickley 2006, Luther & Gentry 2013). Documented effects from many different noise sources span a variety of crucial biological activities, including foraging, mating, communication, and physiology (Shannon et al. 2015).
Research on the effects of noise has been conducted in many different scientific disciplines, including marine biology, physiology, ecology, animal behavior, conservation biology, and a variety of taxa-specific fields. While this highlights the widespread effects of noise and the importance of the topic for developing successful conservation plans, it also presents a challenge to effectively share and synthesize information across these scientific disciplines. Varied biological contexts and research paradigms, combined with cursory familiarity with acoustics, have resulted in a disparate body of literature in terms of measurements, metrics, summaries, and reporting. Misrepresenting or misinterpreting acoustic measurements can have consequences for understanding impacts and developing effective conservation strategies, particularly for endangered species, where mistakes can have irreversible consequences.
Environmental noise studies require many analytical choices that must be informed by the acoustical and biological phenomena of interest. Sound is variation in pressure, and measurements of these variations start with choices about time intervals and spectral ranges. Sounds and auditory ranges span orders of magnitude in frequency and level, and logarithms are used to render sound levels into manageable quantities. A decibel (dB) is a logarithmic unit used to express the ratio of physical quantities proportional to power (e.g. sound pressure squared, particle velocity squared, sound intensity, sound-energy density, voltage squared). The decibel is used to express a level relative to a reference value, and the number of decibels is 10 times the logarithm to base 10 of the ratio. A number of data-processing pathways can result in a level expressed in decibels. Differences in the processing steps or instrument settings can result in different measurements of the noise, making it challenging to determine if variations among studies are actually due to differences in acoustic environments or are a product of the processing steps.
A few key studies have addressed particular issues when working with acoustic data. These studies provided important comparisons of acoustic metrics to describe acoustic stimuli (Madsen 2005, Ellison & Frankel 2012, Leighton 2012), identified sources of misinterpretation when comparing acoustic stimuli (Chapman & Ellis 1998), and recommended metrics, terminology, and techniques for assessing noise impacts on wildlife (Pater et al. 2009, Gill et al. 2015). The Acoustical Society of America (ASA) and other organizations have developed standards detailing how to conduct and analyze measurements of specific sources and defining appropriate terminology (e.g. ANSI 2014, 2015).
Quantitative summaries of acoustic data to describe the level of noise exposure are diverse. In terrestrial acoustics, there is a history of methods and metrics derived from human perception of sound (e.g. perceived loudness, frequency weighting), which has advanced standardization of techniques and metrics (e.g. ANSI 2014, 2015) and the availability of commercialized technology. However, limiting acoustic metrics to human perception of sound hinders insight on the function of sound in a broader ecological context, especially given the diverse hearing ranges and sensitivities across the animal world. The history of underwater acoustics was driven largely by the military, resulting in advances in the theoretical and practical understanding of measuring and characterizing sound in aquatic environments. While there are some standards for underwater acoustics (e.g. ANSI 2009), there is a wide variety of custom metrics and technology (Sousa-Lima et al. 2013). This variety of terms, definitions, and metrics used in both terrestrial and underwater acoustics can be overwhelming for those new to the field. It can also result in confusion for experts working within specific areas of acoustic research, hindering translation across fields of study.
This paper provides a framework for reporting relevant information about acoustic measurements describing the level of sound, in order to understand differences between studies and work towards a broader synthesis of information on the effects of noise on wildlife. Details on measurement parameters, processing techniques, and reported metrics will aid in the interpretation, application, and synthesis of results across studies. We identify distinctions among multiple methods for characterizing noise levels. We also discuss characteristics that are intimately connected to noise level measurements (duration and spectral composition) and that can influence how an animal responds to noise.
Given the need to incorporate results from scientific studies on the effects of anthropogenic noise into management and conservation planning, a specific objective of this study was to discuss how different acoustic metrics are derived and how to assess the comparability of reported sound levels across studies. Our study is aimed at researchers and natural resource managers who may benefit from a deeper understanding of the principles, properties, and metrics associated with characterizing sound and who have a need to synthesize existing information. Our study is not prescriptive but rather provides guidance for those reporting or interpreting acoustic metrics so that an informed evaluation can be made.
Realization of sustainable resource management and conservation policy requires more than an accumulation of studies or information. Shannon et al. (2015) reported significant growth in publications documenting wildlife responses to noise, alongside undesirable (and likely unintentional) variation in the measurement and reporting of noise exposure. Standardized use of metrics and more consistent reporting of measurements would substantially enhance the value of such studies for revealing repeated patterns of noise exposure or biological response, and for relating those patterns to explanatory mechanisms. Thinking systematically about noise measurement is an essential prerequisite for thinking systematically about the ecological consequences of noise (e.g. Francis & Barber 2013).

CHARACTERIZING ANTHROPOGENIC NOISE
A common definition of noise is unwanted sound or sounds that interfere with a signal of interest. However, noise is not a purely subjective designation. Any sound that serves no function is noise. Most sounds produced by transportation and other machinery are unintended, serve no function, and are noise regardless of the attitudes of the listener. Examples of intentional sounds that are useful to the user but unwanted by some listeners include explosives, seismic exploration, sonars, alarms, and acoustic deterrent devices. Many unintended natural sounds, like the footfalls of a mouse or munching of coral by fish, may be noise from the perspective of the producer, yet potentially vital signals for receivers (Goerlitz et al. 2008, Stanley et al. 2010). Preserving opportunities to hear these natural sounds, in addition to management of specific noise sources, is an important component in managing anthropogenic noise for the protection of wildlife species and ecosystem function.
Hereafter, noise will refer to sounds produced by anthropogenic sources. Noise is generated by a variety of sources, both in air and underwater, that vary in level and spectral composition. In many cases, e.g. urban environments and industrial settings, noise may be an aggregate of many individual sources. Many factors affecting sound propagation, including absorption and scattering, modify the characteristics of each noise as it propagates. For example, absorption in water and air varies markedly with frequency (Frosch 1964, Embleton 1996). There is no universal metric for measuring noise level and evaluating its impact. Characterizing noise involves a series of choices about the salient characteristics to be measured and how those characteristics relate to the effects on the receiver.

MEASURING THE LEVEL OF NOISE
The most common way to characterize the level of noise is to measure the variation in pressure in a specified frequency range and time interval (Fig. 1). The squared pressure, averaged over the measurement interval, is divided by the squared reference pressure; this ratio is transformed using the base 10 logarithm and then multiplied by 10:

L_eq = 10 log_10 [ (1 / (t_2 − t_1)) ∫ from t_1 to t_2 of (p^2(t) / p_ref^2) dt ]        (1)

where t_1 and t_2 represent the start and end times of the pressure measurements, t represents the continuous time variable, p(t) is the measured sound pressure, and p_ref is the reference pressure.
The equivalent formula for a waveform that has been sampled digitally is:

L_eq = 10 log_10 [ (1 / N) Σ from i = 1 to N of (p_i^2 / p_ref^2) ]        (2)

where N is the number of samples over which pressure is measured and p_i is the i-th pressure sample.
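Eq. (2) can be sketched in a few lines of code. The snippet below is a minimal illustration, assuming a NumPy array of calibrated pressure samples in pascals; the function name `spl_rms` and the synthetic test tone are our own, not from a specific study.

```python
import numpy as np

def spl_rms(pressure, p_ref):
    """Eq. (2): mean squared pressure over N samples, re: p_ref, in dB."""
    mean_square = np.mean(pressure ** 2)
    return 10 * np.log10(mean_square / p_ref ** 2)

# A 1 kHz tone with an RMS pressure of 1 Pa is ~94 dB SPL re: 20 uPa in air
fs = 44100                                         # sampling frequency, Hz
t = np.arange(fs) / fs                             # 1 s of time samples
tone = np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)   # RMS amplitude = 1 Pa
print(round(spl_rms(tone, 20e-6), 1))              # -> 94.0
```

Note that the same samples referenced to 1 µPa (the underwater convention) would yield a number 26 dB higher, which is one reason the reference pressure must always be stated.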
The logarithmic transformation is useful for measuring quantities that span several orders of magnitude. When sound pressure is converted into decibels with the appropriate reference value, the measurement is called sound pressure level (SPL). SPL implies a reference pressure of 20 µPa for atmospheric measurements and 1 µPa elsewhere unless otherwise specified (ANSI 2004). Therefore, for underwater pressure measurements, 1 µPa is the reference pressure, and levels are reported as dB SPL re: 1 µPa (Chapman & Ellis 1998).
In standard notation, Eq. (1) would be expressed as an equivalent continuous time-averaged sound level over a specified time interval (L_eq,T, where T = t_2 − t_1) (ANSI 2004); this is a common metric used in terrestrial studies. In underwater measurements, the root-mean-square (RMS) SPL is a more common notation. Changing the leading multiplicative constant to 20 and inserting a square root operation inside the square brackets of Eq. (1) yields an identical value to L_eq, and recasts this expression as RMS SPL, converted to decibels. For both L_eq and RMS, the time interval is intrinsic to the measurement and must be supplied when reporting these metrics. Sound level meters (SLMs, instruments only available for terrestrial measurements) usually have 'slow' and 'fast' time averaging settings that correspond to exponential time constants of 1 s and 1/8 s, and are the standard time averaging for L_eq measurements.
Although RMS and L_eq are widely utilized to measure noise levels, there are different acoustic metrics that pertain to other contexts and noise sources. For impulsive noises (almost instantaneous sources like underwater explosions), measures of peak pressure excursions are of interest. Peak-to-peak amplitude (pk−pk) is the difference between the highest and lowest pressure deviations in a given time interval. Peak amplitude (pk) is the maximum absolute value of a pressure deviation in a given time interval. Depending on the signal, the RMS SPL, pk−pk, and pk values can vary by 2 to 12 dB (see Madsen 2005 for worked examples).
One other type of noise metric merits explanation: sound exposure level (SEL or L_E). RMS and L_eq measures of finite duration or transient sounds can be somewhat sensitive to the criteria used to delimit the duration of the sound. The amplitudes of most transient sounds taper from low to high levels and back again, so temporal limits that incorporate more of the total sound energy will have lower RMS sound levels. The SEL expresses the total sound energy in a transient sound as though that energy were delivered in a 1 s interval. This metric is relatively insensitive to the temporal limits used for the measurement, because the sound energy in the tails of the signal is often small compared to the energy in the peak of the signal. SEL also has advantages for impulsive noise, because its value can prove more repeatable than pk or pk−pk measurements. Algebraically, L_E = L_eq + 10 log_10(T), where T is the duration of the measurement in seconds. For example, a noise level of 90 dB L_eq lasting 2 s would have an SEL of 93 dB (note that this is true for both underwater and terrestrial sources). Some studies have summed the SEL over a given period of time to calculate a cumulative sound exposure level (SEL_cum). This provides a measure of total exposure whether the source is stationary or mobile (Erbe & King 2009, Lepper et al. 2012).
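The algebra above is simple enough to sketch directly; the function names below are ours, and the first call reproduces the worked example of a 90 dB L_eq noise lasting 2 s.

```python
import math

def sel_from_leq(leq_db, duration_s):
    """L_E = L_eq + 10*log10(T): total energy re-expressed over a 1 s interval."""
    return leq_db + 10 * math.log10(duration_s)

def sel_cum(sel_values_db):
    """Cumulative SEL: sum individual exposures on an energy basis, back to dB."""
    return 10 * math.log10(sum(10 ** (s / 10) for s in sel_values_db))

print(round(sel_from_leq(90.0, 2.0)))   # -> 93 (the worked example above)
print(round(sel_cum([93.0, 93.0])))     # -> 96: doubling energy adds ~3 dB
```

The cumulative form makes explicit that SELs are summed as energies (pressure squared), never as raw dB values.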
There is always a lower bound to the frequency content of signals designated as sound, and this parameter is specific to the instrumentation used to collect acoustic measurements. The lower frequency bound of a system is commonly specified by the frequency at which the input sensitivity falls below a stated value. This is a crucial parameter, because atmospheric pressure variations on long time scales often dwarf acoustic pressure variations. For example, barometric pressure in the atmosphere is measured in kPa, with sea level extremes varying between 87 and 108 kPa. By comparison, a rather loud sound in air (94 dB SPL re: 20 µPa) corresponds to a pressure deviation of only 1 Pa. Underwater acoustic environments also exhibit larger pressure fluctuations on longer time scales (surface waves, internal waves, tides). Accordingly, all noise measurements have a low frequency limit, and in many broadband noise measurements, this parameter will be critical for interpreting the results. Specifically, measurements collected with recording systems that have lower low-frequency limits will likely report higher sound levels, because the measurements will include energy at these lower frequencies. The low frequency limit is often set by the frequency response of the sensor or recording system.
There is no comparable upper limit to the frequencies of signals that can be designated as sound, although every measurement system will have a high frequency limit. In many broadband, ambient sound level measurements, the resultant values will not be very sensitive to the high frequency limit, but the high frequency limit should be specified as the frequency response of the sensor. All SPL measurements refer to a specific frequency band, so the term 'band sound pressure level' (BSPL) indicates that the reported measurement represents a subset of the broadband capacity of the instrument, discussed further below.

Time interval and duration
Time plays several roles in the calculation of acoustic metrics. It is an explicit part of RMS and L_eq measurements, as described above and expressed in Eqs. (1) & (2). For L_eq and RMS measurements and pk and pk−pk amplitude measurements, the time interval should be specified to indicate what portion of a recorded waveform is referenced. For SEL (L_E) measurements, sound pressure is squared and integrated over a stated period of time or event, relative to a reference sound pressure value. Again, time should be explicitly stated with an SEL measurement. In practice, SEL normalized to a 1 s period is useful when comparing noise levels; it is commonly used and also referred to simply as the SEL, which can be confusing if the time interval is not clearly stated.
Digital acoustic processing introduces another significant time parameter: the sampling rate or sampling frequency. In a digital system, acoustic waveforms are captured as a series of pressure measurements taken at regular intervals, the inverse of the sampling frequency. For example, a sampling frequency of 44.1 kHz measures pressure every 22.7 µs. The sampling frequency specifies the nominal time resolution of the data and the upper limit of frequency content that can be represented without ambiguity. That upper frequency limit is called the Nyquist frequency; it is equal to half of the sampling frequency (e.g. the Nyquist frequency of a 44.1 kHz sampling frequency is 22.05 kHz). Most instruments utilize low-pass filters to limit the signal energy falling above the Nyquist frequency that arrives at the digitizing module, although in principle a bandpass filter could be devised to capture any Nyquist frequency bandwidth of sound energy within the audio spectrum.
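As a quick sketch of these relationships for the 44.1 kHz example:

```python
fs = 44100                     # sampling frequency, Hz
sample_interval_us = 1e6 / fs  # time between samples: ~22.7 microseconds
nyquist_hz = fs / 2            # 22050 Hz: highest unambiguous frequency
print(round(sample_interval_us, 1), nyquist_hz)  # -> 22.7 22050.0
```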
A number of short-term measurements are often calculated from a time series whose duration may span extended periods of time. For example, underwater noise studies often calculate an SPL (RMS) for 200 to 300 s and then summarize these measurements over months (McKenna et al. 2012a) or years (McDonald et al. 2006). A common practice is to summarize these measures using exceedance levels or percentiles (see Merchant et al. 2015). For example, L90 is the level exceeded 90% of the time, which is the same as the 10th percentile statistic. The maximum sound level (SPL_max) refers to the highest SPL value measured over the duration and is not the same as pk. Some studies take the mean and standard deviation of the SPLs in a specified time interval.
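The mapping between exceedance levels and percentiles can be sketched as below; the series of SPL values is hypothetical, and the function name is ours.

```python
import numpy as np

def exceedance_level(spl_series_db, exceed_percent):
    """L_x: the level exceeded x% of the time (e.g. L90 = 10th percentile)."""
    return np.percentile(spl_series_db, 100 - exceed_percent)

rng = np.random.default_rng(0)
spls = rng.normal(60, 5, 10000)   # hypothetical series of 1 s SPL values, dB
l90 = exceedance_level(spls, 90)  # quiet baseline: exceeded 90% of the time
l10 = exceedance_level(spls, 10)  # loud events: exceeded only 10% of the time
print(l90 < l10)  # -> True
```

Because SPLs are logarithmic, the arithmetic mean of a dB series differs from the energy-averaged L_eq of the same data; reports should state which was used.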

Frequency components
Another important role for time in sound measurements is through its inverse: frequency. Frequency can be measured as 1 over the interval required for a signal component to repeat itself, as the legacy units for frequency make clear (cycles s−1). Documenting how sound energy is distributed across frequency is crucial for understanding how the sound will propagate through the atmosphere or ocean and across terrain. Frequency content also affects how an animal hears and perceives that sound (Fay 1998). Hearing sensitivities are generally represented on an audiogram plot, where the sensitivity (or the level a sound needs to reach before an animal perceives it) is plotted across different frequencies. The terms infrasound and ultrasound are of dubious value for acoustical measurements and ecological studies: they reference human hearing capabilities, and they introduce categorical distinctions that misrepresent the graded declines in hearing sensitivity at both ends of the human auditory spectrum.
Quantifying the noise level across frequencies requires a transformation from the time domain (pressure per unit time) to the frequency domain (pressure per unit frequency). In digital acoustic analysis, the fast Fourier transform (FFT) provides a computationally efficient method of transforming data from the time domain to the frequency domain by generating spectral coefficients, known as Fourier coefficients (Welch 1967). Many algorithms and implementations are available (e.g. Duhamel & Vetterli 1990). Reviewing all details of FFT processing is beyond the scope of this paper; however, there are a few notable properties for sound level measurements. These include the relationship of the Fourier coefficients to RMS band levels, the time and frequency resolutions of RMS band levels, and the windowing properties.
First, the connection between RMS SPL and digital Fourier coefficients is exposed by Parseval's theorem:

Σ from t = 0 to N−1 of x_t^2 = (1 / N) Σ from j = 0 to N−1 of |X_j|^2        (3)

When the input data (x_t) are pressure deviations with a mean of 0, this theorem states that the input signal energy is equal to the sum of the squared magnitudes of the Fourier coefficients (X_j), scaled by 1/N. In Eq. (3), X_j represents the j-th complex digital Fourier coefficient, and the squaring operator produces a real value equal to the squared magnitude of the coefficient; therefore, the usual practice of taking the magnitude (or absolute value) of all FFT coefficients results in a sequence of RMS band level measurements. This sequence of RMS band level measurements is called a spectrum; for real signals, the complex coefficients below the Nyquist frequency must be doubled prior to converting to decibels.
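Parseval's theorem in Eq. (3) can be verified numerically; the sketch below uses NumPy's FFT scaling convention, in which the sum of squared coefficient magnitudes must be divided by N to equal the time-domain energy.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1024)
x -= x.mean()                 # zero-mean pressure deviations, as in Eq. (3)

X = np.fft.fft(x)             # complex Fourier coefficients X_j
time_energy = np.sum(x ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)  # Parseval, NumPy scaling

print(np.isclose(time_energy, freq_energy))  # -> True
```

Other FFT libraries adopt other normalizations, which is itself a reason to document the processing chain.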
The properties of the FFT determine the time and frequency resolution of the RMS band level measurements. Awareness (and disclosure) of the time and frequency resolution of your processed acoustic data is important, particularly when describing differences between acoustic signals, characterizing and comparing noise sources, and displaying results in a spectrogram. The first complex Fourier coefficient corresponds to a fundamental frequency that completes exactly 1 cycle in the data sequence processed by the FFT. Therefore, the frequency corresponding to this coefficient is 1 over the duration of the data sequence (or the length of the data sequence divided by the sampling frequency). Subsequent Fourier coefficients correspond to multiples of this frequency. The nominal width of each FFT frequency band is equal to the inverse of the duration of the sound data processed by the FFT. The duration of the FFT data sample represents a measure of the time resolution of this analysis. Therefore, the product of the time resolution and the frequency resolution is 1.0; improving one form of resolution is achieved at the cost of diminishing the other. This type of relationship might be called the uncertainty principle for the FFT. It is possible to improve on this tradeoff by a factor of 2 with more sophisticated use of the FFT (Cohen 1995).
This relationship between time and frequency may best be explained through examples (see Table 1). Suppose a recording was made at a sampling rate of 2 kHz (2000 samples s−1) and that we wished to measure frequency and temporal characteristics of blue whale calls in this recording at a resolution of 1 Hz and 0.1 s (Example 1, Table 1). The product of these values is 0.1, so we know that achieving the desired resolution is impossible. To achieve a 1 Hz bandwidth (Example 1, Table 1), we need 1 s of data, or 2000 samples. This exceeds our time resolution target of 0.1 s. To change the time resolution, we might work with 0.1 s of data, or 200 samples (Example 2, Table 1). Now the frequency resolution is 1 over the FFT data duration, 1/0.1, or 10 Hz. Spectrograms and other analyses often overlap data in successive FFT calculations, such that the tail of one data sample becomes the head of the other. This can improve the apparent time resolution of a spectrogram (Example 1, Table 1), but some of this improvement is illusory; the shared data in successive FFT calculations means these measurements are not independent. Nonetheless, overlapping successive FFTs is a common way to approach the desired resolution (Example 1, Table 1).
To calculate percent overlap, refer to Table 1. In Example 1, 90% overlap in samples is necessary to obtain a time resolution of 0.1 s with a frequency resolution of 1 Hz. Another method for improving the apparent time-frequency resolution of FFT analyses is padding a tapered segment of data with zeroes. This increases the apparent length of the data sequence (improving frequency resolution). The resulting spectrum will appear smoother, as this generates a form of interpolation, but there has been no increase in the number of independent spectral measurements.
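The overlap arithmetic for Example 1 can be sketched as:

```python
fs = 2000    # sampling rate, Hz (Example 1)
nfft = 2000  # 1 s of data per FFT -> 1 Hz frequency resolution
hop = 200    # advance 0.1 s (200 samples) between successive FFTs

overlap_percent = 100 * (nfft - hop) / nfft
print(overlap_percent)  # -> 90.0
```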
An additional factor that can affect FFT analyses of sound levels is how the data presented to the FFT are tapered or windowed. The discontinuity between the last and first samples presented to the FFT can affect all of the FFT coefficients, an artifact known as 'spectral leakage'. Windowing reduces or eliminates this effect, at the cost of reducing the effective frequency resolution of the output. As an aside, more accurate spectral estimates have been realized through the 'multitaper' methods introduced by Thomson (1982). The windowing method should be reported in the description of the acoustic analysis (Fig. 1).
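A minimal sketch of spectral leakage and its reduction by windowing; the 100.5 Hz test tone is chosen deliberately so that it does not complete an integer number of cycles in the analysis window, creating the discontinuity described above.

```python
import numpy as np

fs, n = 1000, 1000
t = np.arange(n) / fs
# 100.5 Hz: the tone does not end where it began, so the wrapped ends clash
x = np.sin(2 * np.pi * 100.5 * t)

rect = np.abs(np.fft.rfft(x))                  # no window (rectangular)
hann = np.abs(np.fft.rfft(x * np.hanning(n)))  # Hann-windowed

# Far from the tone (e.g. the 300 Hz bin), leakage is far lower with Hann
print(hann[300] < rect[300])  # -> True
```

The cost appears near the tone: the Hann-windowed peak is spread across a few more bins, i.e. slightly coarser effective frequency resolution.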

Frequency weighting
The practice of frequency weighting arose in the context of human community noise studies, where the objective was to generate broadband sound level measurements that incorporated what was known about the changes in human hearing sensitivity across the audible spectrum. Frequency weighting is an algorithm of frequency-dependent attenuation that simulates the hearing sensitivity of the study subjects. Applying a frequency weight to measured sound levels provides a way to discriminate what sound is heard by the subjects and generate a broadband metric. A-weighting was designed to adjust each frequency band level measurement downwards by an amount proportional to a human 'equal loudness' curve for relatively low level sounds (i.e. the loudness of a 40 dB SPL tone at 1 kHz; see Fletcher & Munson 1933, IEC 2013). Despite the original focus on relatively low perceived loudness levels, A-weighting has become the standard for measuring most noise impacts to humans, and most SLMs are equipped with an A-weight filter. C-weighting takes into account the flatter response associated with human hearing at higher SPLs (e.g. >100 dB) and is normally used for peak measurements when sounds are likely to be of high intensity. Z-weighting means the absence of any weighting, denoting a flat frequency response between 10 Hz and 20 kHz. Given the capacity of humans, and many other animals, to parse their audible spectra into independently perceived components, all weighted, broadband sound level summaries will be most meaningful when all of the sounds in question have very similar spectral distributions. Otherwise, the value of human-weighted broadband measurements for wildlife studies is not clear.
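For readers who want to apply A-weighting outside an SLM, the curve can be computed directly. The sketch below uses the standard pole frequencies from IEC 61672 (rounded values) and is illustrative rather than a certified implementation.

```python
import math

def a_weight_db(f):
    """A-weighting attenuation at frequency f (Hz), in dB.

    Rounded IEC 61672 pole frequencies; the +2.00 dB offset normalizes
    the curve to 0 dB at 1 kHz.
    """
    f2 = f ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

print(round(a_weight_db(1000.0)))  # -> 0: no adjustment at 1 kHz
print(round(a_weight_db(100.0)))   # -> -19: strong low-frequency attenuation
```

The roughly 19 dB attenuation at 100 Hz is the behavior discussed later in the terrestrial comparison: A-weighted levels of low-frequency-dominated noise can sit far below C- or Z-weighted levels of the same sound.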
If the hearing thresholds for a species of interest are known, a frequency weighting function can be developed to adjust sound levels based on specific hearing sensitivities. Audiograms describe the hearing range and sensitivity of a species and provide information for developing frequency weighting functions for specific species. Behavioral psychophysics, evoked potential audiometry, and auditory morphology can all inform the hearing range and sensitivity. Extensive work, both in captive settings and in the wild, has been done on marine mammal hearing to develop frequency weighting functions and threshold levels to assess the effects of anthropogenic noise. Currently, 5 functional hearing groups and associated auditory weighting functions exist for marine mammals and are used to define acoustic threshold levels (Southall et al. 2007, NOAA 2015). For terrestrial species, an owl-weighted function has been developed based on the audiograms of 2 owl species (Delaney et al. 1999).

Calibrated SPLs
Calibrated sound levels are necessary to draw meaningful comparisons over time, at different locations, and across different studies. Most acoustic sensors (hydrophones or microphones) measure a change in voltage, with a direct relationship between the voltage generated per unit of sound pressure. Typically, gain is applied to the voltage signal using a pre-amplifier. How the signal is digitized sets the amplitude range (e.g. a 16-bit recording resolves 2^16 discrete amplitude values, a peak-to-peak range of ±2^15 counts). All of these factors are specific to an acoustic data acquisition system, and therefore the calibration process is unique to each system and its recording parameters.
An 'end-to-end' calibration, accounting for the effects of each transformation applied to a signal, is one method for calibrating acoustic data (Merchant et al. 2015) and can be used for both underwater and terrestrial recording systems. A system can also be calibrated by playing a signal with a known SPL at a specific frequency to a recording system and using this to adjust the recorded values to the correct levels; this is a common method for underwater instrumentation and can also be used for terrestrial systems. Another method is comparative calibration, using simultaneous recordings of an uncalibrated system with a calibrated system, like an SLM (Mennitt & Fristrup 2012). This type of calibration is more typical for terrestrial recording systems, given that SLMs are only for in-air measurements. Most SLMs output calibrated SPLs, with the signal processing and calibration happening internally. For these systems, maintaining the recommended calibration schedule is necessary, and noting the make and model of the instrument is adequate for reporting. Regardless of the calibration method used, it is important to report the technique, the level of accuracy, and how the accuracy varies by frequency.
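An end-to-end correction chain can be sketched as below; every numeric value here is an illustrative assumption, not the specification of any particular instrument.

```python
# Illustrative end-to-end conversion from raw ADC counts to pressure.
SENSITIVITY_DB = -170.0  # hydrophone sensitivity, dB re: 1 V/uPa (assumed)
GAIN_DB = 20.0           # pre-amplifier gain, dB (assumed)
ADC_VPP = 5.0            # ADC full-scale peak-to-peak voltage (assumed)
ADC_BITS = 16            # digitizer resolution (assumed)

def counts_to_pressure_upa(counts):
    """Undo each stage of the recording chain: counts -> volts -> uPa."""
    volts = counts * ADC_VPP / (2 ** ADC_BITS)  # ADC counts to volts
    volts /= 10 ** (GAIN_DB / 20)               # remove pre-amp gain
    return volts / 10 ** (SENSITIVITY_DB / 20)  # volts to pressure, uPa
```

Calibrated pressure samples produced this way can then enter Eq. (2) with p_ref = 1 µPa; reporting each stage's value is what makes the resulting SPLs reproducible.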

TOWARDS A STANDARDIZED REPORTING OF SPL
There are many processing pathways to a single sound level measure expressed as dB SPL, as detailed in the previous sections and summarized in Fig. 1. Decisions about the specific analytical pathway are driven by the question of interest, the noise source, or the characteristics of the receiver. It is therefore not appropriate to create a standardized metric for all studies; instead, we offer guidance for standardized reporting of the relevant details for acoustic measurements.
For some sources the measurement and processing steps are set by specific international or national standards, for example, underwater ship noise measurement (ANSI 2009, ISO 2012), wind turbine noise measurement (IEC 2002), and railway noise (ISO 2013). In other cases, decisions about the processing steps are determined by the individual researcher based on how best to characterize the noise level for the biological response being measured. For example, measurement of noise will likely differ if one is measuring physical injury, effects on hearing (sensory degradation), or behavioral or community effects. Regardless of the motivation, in order to maximize interpretability of a noise level and ensure comparability with noise levels across studies, full descriptions of how the SPL was measured, the time interval of the measure, the duration of the measure, and how the metric was summarized are essential. Fig. 1 provides guidance on how the processing steps and reporting details are related.
To explore potential differences in reported acoustic metrics, we drew from examples in the peer-reviewed literature and asked whether the reported dB values are directly comparable (Table 2). In other words, is it possible to determine whether differences in reported noise levels are real because the acoustic metrics are comparable, or could the differences also be related to different acoustic metrics or processing steps? If the latter is true, caution should be used when comparing results across studies, as it is unknown whether differences are related to the metrics used or to the actual noise. Further, if it is unclear how metrics are derived, setting conservation standards based on the results would be problematic. There would be no way to evaluate whether the desired noise levels are being met. The following comparisons are simply meant to illustrate whether reported noise levels from different studies are comparable and do not discount the methods and results of the individual studies.
Two studies measuring background noise in the marine environment were selected to illustrate how differences in reported sound levels are likely related to the acoustic measurements made (Table 2: Miksis-Olds & Wagner 2011, Parks et al. 2011). Both of these studies examined how marine mammal species, specifically West Indian manatees Trichechus manatus and endangered North Atlantic right whales Eubalaena glacialis (Reilly et al. 2012), responded to increased background noise in order to inform conservation efforts. The reported dB values at which a biological response was measured differed by 45 dB between these studies (Table 2). While acoustic environments can vary by this much, there was a major difference between the acoustic measurements, specifically the frequency band of each measurement: the study with higher levels (Parks et al. 2011) used a broadband measurement that included lower frequencies (20−8000 Hz), while the study reporting lower levels (Miksis-Olds & Wagner 2011) used a narrow mid-frequency band (3563−4489 Hz), a measurement that excluded lower-frequency noise. This difference in methodology likely explains part of the large difference in reported sound level values. Because different frequency bands were used, it is not accurate to directly compare the dB levels between these 2 studies. Excluding the low-frequency sound energy from a measurement will result in lower reported noise levels because ship noise and distant sources are not included. Another important difference was how the measured SPLs were summarized; Miksis-Olds & Wagner took an average of the 5 s SPLs, whereas Parks et al. used the lowest 10 s value recorded. Both studies reported details on how acoustic metrics were derived, and it was fairly straightforward to evaluate how comparable the results are and what factors, in addition to the noise source, might be causing the differences in reported sound levels.
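The effect of the analysis band can be demonstrated with a simple sketch (the signal and band edges below are illustrative, not data from either study): the same recording, analyzed over a broadband versus a narrow mid-frequency band, yields very different dB values when most of the energy sits at low frequencies.

```python
import numpy as np

def band_spl(pressure, fs, f_lo, f_hi, p_ref=1e-6):
    """SPL (dB re p_ref) of the energy between f_lo and f_hi Hz,
    computed from the FFT via Parseval's relation. For simplicity
    this sketch ignores DC/Nyquist bin edge cases."""
    n = len(pressure)
    spec = np.fft.rfft(pressure)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    mean_square = 2.0 * np.sum(np.abs(spec[mask]) ** 2) / n ** 2
    return 10.0 * np.log10(mean_square / p_ref ** 2)

# Hypothetical mixture: a strong low-frequency (ship-like) 100 Hz
# component plus a weaker 4 kHz mid-frequency component.
fs, n = 16000, 16000
t = np.arange(n) / fs
p = 1.0 * np.sin(2 * np.pi * 100 * t) + 0.1 * np.sin(2 * np.pi * 4000 * t)

broad = band_spl(p, fs, 20, 8000)       # broadband, includes 100 Hz
narrow = band_spl(p, fs, 3563, 4489)    # mid-frequency band only
print(round(broad - narrow))            # 20
```

For this synthetic signal the two bands differ by roughly 20 dB even though the underlying sound field is identical, which is why the band limits must be reported alongside any level.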
Two studies measuring noise conditions in the terrestrial environment (Mendes et al. 2011, Shieh et al. 2012) were also selected to illustrate how to compare reported sound levels and evaluate comparability. These studies examined how vocalizations change in response to increased noise for common blackbirds (Mendes et al. 2011) and cicadas (Shieh et al. 2012). Like the marine examples, experiments were conducted in natural settings. Both studies reported an average of measured SPLs using sound level meters (Table 2); the reported values differed by 18 dB. While the difference may be real, it is first important to determine whether the values are comparable based on the reported acoustic metrics. The most notable difference is that the study with higher reported levels used C-weighting rather than the A-weighting used in the other study; this choice of frequency weighting likely accounts for much of the difference. Most acoustic energy in urban settings is in the lower frequencies; applying an A-weighting filter would reduce the measured noise level by about 20 dB at 100 Hz, whereas a C-weighting filter would leave the level at 100 Hz essentially unchanged. Other differences between the studies were that some of the settings on the commercially available SLMs either differed or were not reported (e.g. amplitude metric, time interval). Because of the different frequency weighting, it is not accurate to directly compare the dB levels between these 2 studies.
From a resource management perspective, knowing the differences in the acoustic measurements is vital when making decisions about acceptable levels or thresholds for conservation strategies, particularly for endangered species where mistakes can have significant effects on the species. In the marine example, setting a noise level threshold for protecting marine mammals from noise based on the level and metrics reported for the endangered West Indian manatee (see Table 2: Miksis-Olds & Wagner 2011) would limit a manager's ability to detect changes in noise below 3 kHz, i.e. frequencies known to be ecologically important to many marine species.

ADDITIONAL CHARACTERISTICS OF NOISE RELEVANT TO BIOLOGICAL RESPONSE
Noise presents a novel acoustic stimulus that alters the acoustic conditions of a habitat. The level, duration, and spectral composition of a noise source all convey information about the noise (e.g. loudness, distance, motion). For example, the presence of harmonics and high-frequency components offers cues for proximity and relative motion. A biological response to noise likely relates to all of these characteristics of the noise relative to the background acoustic environment and to similarities of the noise to sounds of interest.
Previous studies have called for a broader characterization of noise to better understand its effects on wildlife (Pater et al. 2009, Francis & Barber 2013, Gill et al. 2015). This section expands on these ideas and relates particular noise characteristics and metrics to behavioral responses or perceptions of the noise. While the auditory capabilities, communication ranges, and behavioral state of an animal exposed to a noise can influence its response, we focus our discussion on characteristics of the noise rather than the condition of the animal. The intent is to stimulate broader thinking on which characteristics of noise might drive or mediate a response, to discuss how best to quantify those characteristics, and ultimately to incorporate the findings into the management of noise for wildlife and ecosystems.
The presence of noise can reduce an animal's ability to detect and therefore respond to important acoustic cues in its environment. All sources of noise can cause acoustic sensory degradation, although sources categorized as chronic (i.e. those that continue for long durations or occur frequently) likely have a greater effect. A simple noise level metric does not fully capture the effects of these chronic noise sources. Reductions in listening area and loss of communication space have been proposed as additional methods to quantify the effects of chronic noise sources on sensory systems (Clark et al. 2009, Barber et al. 2010, Hatch et al. 2012). The spatial and temporal co-occurrence of noise with biologically relevant acoustic cues is a key consideration, in addition to the spectral similarities (Francis & Barber 2013). Another characteristic of noise sources that may influence a biological response to a degraded acoustic environment is the duration and occurrence of noise-free intervals. Short and infrequent intervals without noise may result in a change of behavior, such as leaving a habitat (Sarà et al. 2007) or changing vocal communication by calling more often (Di Iorio & Clark 2010), whereas long or numerous noise-free intervals may result in animals remaining in the habitat but waiting for these opportunities to listen or call (Fuller et al. 2007).
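The sensitivity of listening area to modest noise increases can be illustrated with a simple geometric sketch (an idealization, not a metric from the cited studies): under spherical spreading, a rise of ΔL dB in masking noise shrinks the maximum detection radius by a factor of 10^(−ΔL/20), and therefore the listening area by 10^(−ΔL/10).

```python
def listening_area_remaining(delta_noise_db):
    """Fraction of the original listening area remaining after masking
    noise rises by delta_noise_db, assuming spherical spreading
    (received level falls 6 dB per doubling of distance)."""
    return 10.0 ** (-delta_noise_db / 10.0)

# A modest 6 dB rise in background noise leaves only ~25% of the
# original listening area; a 10 dB rise leaves 10%.
print(round(listening_area_remaining(6), 2))    # 0.25
print(round(listening_area_remaining(10), 3))   # 0.1
```

The point of the sketch is that listening-area metrics respond nonlinearly to dB changes, which is why they can reveal impacts that a single reported noise level obscures.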
Acute or transient noise sources are predicted to elicit a response similar to a predatory threat (e.g. flee, startle response; Francis & Barber 2013). The level of the noise received may provide an animal with some indication of distance to the threat and therefore mediate a response. In addition, a response is likely linked to how similar the noise is, in terms of spectral content, to a predator signal or warning call from a conspecific (Tyack et al. 2011). Comparing the spectral content of the noise with that of biological acoustic cues can reveal similarities in structure, such as frequency and the presence of harmonics. Further, the duration and occurrence of the noise compared to a predator call are other important characteristics that may influence how an animal responds. In addition to these acoustic characteristics of the noise source, the behavioral state of the animal (Ellison et al. 2012, Goldbogen et al. 2013) and prior exposure to the noise may result in differential responses.
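One simple way such a spectral comparison could be quantified (our illustration, not a method from the cited studies) is the cosine similarity between the magnitude spectra of a noise recording and a biological signal; the waveforms below are hypothetical pure tones standing in for a call and two noise types.

```python
import numpy as np

def spectral_similarity(x, y):
    """Cosine similarity (0-1) between the magnitude spectra of two
    equal-length waveforms; 1 means identical spectral shape."""
    sx = np.abs(np.fft.rfft(x))
    sy = np.abs(np.fft.rfft(y))
    return float(np.dot(sx, sy) / (np.linalg.norm(sx) * np.linalg.norm(sy)))

fs, n = 16000, 16000
t = np.arange(n) / fs
call = np.sin(2 * np.pi * 2000 * t)          # hypothetical alarm call
noise_overlap = np.sin(2 * np.pi * 2000 * t) # noise overlapping the call band
noise_far = np.sin(2 * np.pi * 100 * t)      # low-frequency noise

print(round(spectral_similarity(call, noise_overlap), 2))  # 1.0
print(round(spectral_similarity(call, noise_far), 2))      # 0.0
```

Real calls and noise sources are broadband, so in practice the comparison would be made over smoothed or band-summed spectra, but the index behaves the same way: high values flag noise that spectrally resembles a biologically meaningful signal.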
Noise that induces physiological responses, such as hearing threshold shifts, hearing loss, or hearing damage, may result from sources with high noise levels (McCauley et al. 2003). For these sources, it is important to measure the maximum levels reached during a given event (e.g. SPLmax). The duration of the noise, the rate at which power rises from detectable to maximum levels, and the occurrence of the noise are additional characteristics that affect hearing damage. For example, even if levels are lower, longer exposure can result in hearing threshold shifts or hearing loss (Smith et al. 2004, Finneran 2015); hence the cumulative SEL is important as a metric that includes information on the duration (Ellison et al. 2016, Fleishman et al. 2016, Hawkins & Popper 2016). Another known physiological response to noise is increased stress hormone levels when exposed to chronic noise (Blickley et al. 2012, Rolland et al. 2012). For these sources, the duration of the exposure and the noise-free intervals are likely important acoustic characteristics that influence the stress response and recovery from noise.
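How SEL folds duration into a level can be sketched directly from its definition (the event levels below are hypothetical): SEL equals the RMS SPL plus 10·log10 of the duration in seconds, and repeated events sum on an energy basis.

```python
import numpy as np

def sel(spl_rms_db, duration_s):
    """Sound exposure level (dB re the same reference, 1 s basis) of a
    single event with constant RMS level over duration_s seconds."""
    return spl_rms_db + 10.0 * np.log10(duration_s)

def cumulative_sel(sel_values_db):
    """Energy sum of per-event SELs, e.g. across a sequence of events."""
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(sel_values_db) / 10.0)))

# A hypothetical 170 dB (RMS) event lasting 10 s has an SEL of 180 dB;
# two such events accumulate to ~183 dB (energy doubling = +3 dB).
single = sel(170.0, 10.0)
print(single)                                        # 180.0
print(round(cumulative_sel([single, single]), 2))    # 183.01
```

The +3 dB per doubling of exposure is why a cumulative metric can flag risk from long or repeated exposures that a per-event maximum level would miss.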
The unintended sounds made by animals (e.g. footfalls, munching of coral by fish) reveal their presence and location to potential predators and can vary by substrate type or activity of the prey (Goerlitz et al. 2008, Stanley et al. 2010). For some species, the presence of noise provides an opportunity to exploit a habitat free from predators, either because the predator species avoids noisy areas or because predators are unable to detect unintended sounds from prey. This can have cascading effects on ecosystem structure and function (Francis et al. 2012). Characterizing these unintended sounds is an important element not only for understanding the ecology of prey detection but also for predicting when noise conditions reduce the effectiveness of foraging (Siemers & Schaub 2011). The levels and spectral content of these sounds relative to noise conditions and the hearing capabilities of predators remain largely unexplored in the scientific literature, yet preserving opportunities to hear these sounds might be vital to species and ecosystems.
The noise level received by the animal relative to background noise or the variability of noise in an animal's habitat may provide further insight into biological responses. Hearing sensitivities of some species are at least in part adapted to the ambient acoustic conditions of their habitat (Amoser & Ladich 2005). Reporting noise levels relative to the variability in background levels requires longer-term recordings that capture the variability (Lynch et al. 2011). Methods for measuring acoustic habitats from passive acoustic recordings, such as statistical summaries indicating the percentage of time above a certain level, have been described in other studies (see Merchant et al. 2015). One prediction might be that habitats with lower variability in natural sound levels may contain species that are more sensitive to noise and therefore respond at lower exposure levels. These animals may not have evolved traits that allow them to adapt to situations where noise levels are elevated. Furthermore, characterizing the acoustic habitat before and after exposure to noise offers greater understanding of the acoustic conditions associated with recovery.
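Such statistical summaries are straightforward to compute from a time series of short-interval levels; the sketch below (with a hypothetical sequence of 1 s Leq values) shows the two summaries mentioned above: an exceedance level L_n and the percentage of time above a threshold.

```python
import numpy as np

def exceedance_level(levels_db, n):
    """L_n: the level (dB) exceeded n percent of the time, e.g. L90
    approximates the background and L10 the louder events."""
    return float(np.percentile(levels_db, 100 - n))

def percent_time_above(levels_db, threshold_db):
    """Percentage of measurement intervals exceeding threshold_db."""
    levels = np.asarray(levels_db)
    return 100.0 * float(np.mean(levels > threshold_db))

# Hypothetical sequence of 1 s Leq values from a long-term recorder.
levels = np.array([40, 45, 50, 55, 60, 65, 70, 75, 80, 85], dtype=float)
print(exceedance_level(levels, 50))       # 62.5  (median level, L50)
print(percent_time_above(levels, 60))     # 50.0  (% of time above 60 dB)
```

Because percentile-based summaries depend on the interval length and total recording duration, both must be reported for the statistics to be comparable across studies.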

BEYOND CHARACTERIZING NOISE
The use of acoustics can enhance our understanding of animal behavior and ecological processes. Advances in technology, novel analytical methods, and partnerships with other disciplines (e.g. engineering and computer science) have allowed more widespread use of acoustic data in biological studies (Blumstein et al. 2011). New insights into fundamental ecological questions have been possible through the use of passive acoustics, such as determining animal abundance and density, particularly among elusive and rare or endangered species (Marques et al. 2013), biological diversity in certain habitats (Depraetere et al. 2012), timing of biological events (Buxton et al. 2016), and animal behavior, both vocal (Luther & Gentry 2013) and non-vocal (Johnson & Tyack 2003, Lynch et al. 2013). It is crucial that scientists working in these disciplines are also mindful of the different methods for characterizing and reporting acoustic data. The methods and reporting details reviewed in this paper may also be useful to researchers working in these fields.

CONCLUSIONS
As the natural world becomes noisier, there is an urgent need for greater synthesis of existing data in order to develop effective methods for conserving natural acoustic environments and the species that depend on them. Here we provide relevant details on how noise is measured and a standardized approach to reporting information on acoustic metrics (see Fig. 1: 'What to report'). The goal is to help guide this diverse field of research so that information is accessible, rigorous, and comparable across studies and disciplines. Further, we hope the discussion is valuable to natural resource managers charged with interpreting and using existing scientific evidence to make informed decisions or set conservation goals. Evidence-based decision making relies equally on the interpreter's ability to extract relevant information and on how the information is presented; our paper attempts to address both of these issues. A second goal of this paper is to stimulate broader thinking about how best to characterize noise from the perspective of the species or habitat of concern. Incorporating these additional measures may provide insight and predictive power regarding the consequences of noise and ultimately protect species and ecosystems through effective conservation actions.


Table 2. Comparison of studies measuring noise levels to evaluate a biological response. SPL: sound pressure level; RMS: root-mean-square; LAeq: A-weighted equivalent continuous time-averaged sound level