tracing_subscriber/filter/layer_filters/mod.rs
1//! ## Per-Layer Filtering
2//!
3//! Per-layer filters permit individual `Layer`s to have their own filter
4//! configurations without interfering with other `Layer`s.
5//!
6//! This module is not public; the public APIs defined in this module are
7//! re-exported in the top-level `filter` module. Therefore, this documentation
8//! primarily concerns the internal implementation details. For the user-facing
9//! public API documentation, see the individual public types in this module, as
//! well as the `Layer` trait documentation's [per-layer filtering
//! section][1].
12//!
13//! ## How does per-layer filtering work?
14//!
15//! As described in the API documentation, the [`Filter`] trait defines a
16//! filtering strategy for a per-layer filter. We expect there will be a variety
17//! of implementations of [`Filter`], both in `tracing-subscriber` and in user
18//! code.
19//!
20//! To actually *use* a [`Filter`] implementation, it is combined with a
21//! [`Layer`] by the [`Filtered`] struct defined in this module. [`Filtered`]
22//! implements [`Layer`] by calling into the wrapped [`Layer`], or not, based on
23//! the filtering strategy. While there will be a variety of types that implement
24//! [`Filter`], all actual *uses* of per-layer filtering will occur through the
25//! [`Filtered`] struct. Therefore, most of the implementation details live
26//! there.
27//!
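//! For example, here is a minimal sketch of what this looks like from the
//! user's point of view (using the `fmt` layer and a `LevelFilter` as the
//! per-layer filter):
//!
//! ```
//! use tracing_subscriber::{filter::LevelFilter, prelude::*};
//!
//! // `with_filter` pairs the `fmt` layer with the `LevelFilter`, returning a
//! // `Filtered` layer that wraps both.
//! let filtered = tracing_subscriber::fmt::layer().with_filter(LevelFilter::INFO);
//!
//! // The `Registry` generates a `FilterId` for the `Filtered` layer when it
//! // is added to the subscriber stack.
//! tracing_subscriber::registry().with(filtered).init();
//! ```
//!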
28//! [1]: crate::layer#per-layer-filtering
29//! [`Filter`]: crate::layer::Filter
30use crate::{
31 filter::LevelFilter,
32 layer::{self, Context, Layer},
33 registry,
34};
35use std::{
36 any::TypeId,
37 cell::{Cell, RefCell},
38 fmt,
39 marker::PhantomData,
40 ops::Deref,
41 sync::Arc,
42 thread_local,
43};
44use tracing_core::{
45 span,
46 subscriber::{Interest, Subscriber},
47 Dispatch, Event, Metadata,
48};
49pub mod combinator;
50
51/// A [`Layer`] that wraps an inner [`Layer`] and adds a [`Filter`] which
52/// controls what spans and events are enabled for that layer.
53///
54/// This is returned by the [`Layer::with_filter`] method. See the
55/// [documentation on per-layer filtering][plf] for details.
56///
57/// [`Filter`]: crate::layer::Filter
58/// [plf]: crate::layer#per-layer-filtering
59#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
60#[derive(Clone)]
61pub struct Filtered<L, F, S> {
62 filter: F,
63 layer: L,
64 id: MagicPlfDowncastMarker,
65 _s: PhantomData<fn(S)>,
66}
67
68/// Uniquely identifies an individual [`Filter`] instance in the context of
69/// a [`Subscriber`].
70///
71/// When adding a [`Filtered`] [`Layer`] to a [`Subscriber`], the [`Subscriber`]
72/// generates a `FilterId` for that [`Filtered`] layer. The [`Filtered`] layer
73/// will then use the generated ID to query whether a particular span was
74/// previously enabled by that layer's [`Filter`].
75///
76/// **Note**: Currently, the [`Registry`] type provided by this crate is the
77/// **only** [`Subscriber`] implementation capable of participating in per-layer
78/// filtering. Therefore, the `FilterId` type cannot currently be constructed by
/// code outside of `tracing-subscriber`. In the future, new APIs will be added
/// to allow non-`Registry` [`Subscriber`]s to also participate in per-layer
81/// filtering. When those APIs are added, subscribers will be responsible
82/// for generating and assigning `FilterId`s.
83///
84/// [`Filter`]: crate::layer::Filter
85/// [`Subscriber`]: tracing_core::Subscriber
86/// [`Layer`]: crate::layer::Layer
87/// [`Registry`]: crate::registry::Registry
88#[cfg(feature = "registry")]
89#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
90#[derive(Copy, Clone)]
91pub struct FilterId(u64);
92
93/// A bitmap tracking which [`FilterId`]s have enabled a given span or
94/// event.
95///
96/// This is currently a private type that's used exclusively by the
97/// [`Registry`]. However, in the future, this may become a public API, in order
98/// to allow user subscribers to host [`Filter`]s.
99///
100/// [`Registry`]: crate::Registry
101/// [`Filter`]: crate::layer::Filter
102#[derive(Copy, Clone, Eq, PartialEq)]
103pub(crate) struct FilterMap {
104 bits: u64,
105}
106
107impl FilterMap {
108 pub(crate) const fn new() -> Self {
109 Self { bits: 0 }
110 }
111}
112
/// The current state of `enabled` calls to per-layer filters on this
114/// thread.
115///
116/// When `Filtered::enabled` is called, the filter will set the bit
117/// corresponding to its ID if the filter will disable the event/span being
118/// filtered. When the event or span is recorded, the per-layer filter will
119/// check its bit to determine if it disabled that event or span, and skip
120/// forwarding the event or span to the inner layer if the bit is set. Once
121/// a span or event has been skipped by a per-layer filter, it unsets its
/// bit, so that the `FilterMap` is cleared for the next set of
123/// `enabled` calls.
124///
/// `FilterState` is also read by the `Registry` for two reasons:
126///
127/// 1. When filtering a span, the Registry must store the `FilterMap`
128/// generated by `Filtered::enabled` calls for that span as part of the
129/// span's per-span data. This allows `Filtered` layers to determine
130/// whether they had previously disabled a given span, and avoid showing it
131/// to the wrapped layer if it was disabled.
132///
133/// This allows `Filtered` layers to also filter out the spans they
134/// disable from span traversals (such as iterating over parents, etc).
135/// 2. If all the bits are set, then every per-layer filter has decided it
136/// doesn't want to enable that span or event. In that case, the
137/// `Registry`'s `enabled` method will return `false`, so that
138/// recording a span or event can be skipped entirely.
139#[derive(Debug)]
140pub(crate) struct FilterState {
141 enabled: Cell<FilterMap>,
142 // TODO(eliza): `Interest`s should _probably_ be `Copy`. The only reason
143 // they're not is our Obsessive Commitment to Forwards-Compatibility. If
    // this changes in `tracing-core`, we can make this a `Cell` rather than
145 // `RefCell`...
146 interest: RefCell<Option<Interest>>,
147
148 #[cfg(debug_assertions)]
149 counters: DebugCounters,
150}
151
152/// Extra counters added to `FilterState` used only to make debug assertions.
153#[cfg(debug_assertions)]
154#[derive(Debug)]
155struct DebugCounters {
156 /// How many per-layer filters have participated in the current `enabled`
157 /// call?
158 in_filter_pass: Cell<usize>,
159
160 /// How many per-layer filters have participated in the current `register_callsite`
161 /// call?
162 in_interest_pass: Cell<usize>,
163}
164
165#[cfg(debug_assertions)]
166impl DebugCounters {
167 const fn new() -> Self {
168 Self {
169 in_filter_pass: Cell::new(0),
170 in_interest_pass: Cell::new(0),
171 }
172 }
173}
174
175thread_local! {
176 pub(crate) static FILTERING: FilterState = const { FilterState::new() };
177}
178
179/// Extension trait adding [combinators] for combining [`Filter`].
180///
181/// [combinators]: crate::filter::combinator
182/// [`Filter`]: crate::layer::Filter
183pub trait FilterExt<S>: layer::Filter<S> {
    /// Combines this [`Filter`] with another [`Filter`] so that spans and
185 /// events are enabled if and only if *both* filters return `true`.
186 ///
187 /// # Examples
188 ///
189 /// Enabling spans or events if they have both a particular target *and* are
190 /// above a certain level:
191 ///
192 /// ```
193 /// use tracing_subscriber::{
194 /// filter::{filter_fn, LevelFilter, FilterExt},
195 /// prelude::*,
196 /// };
197 ///
198 /// // Enables spans and events with targets starting with `interesting_target`:
199 /// let target_filter = filter_fn(|meta| {
200 /// meta.target().starts_with("interesting_target")
201 /// });
202 ///
203 /// // Enables spans and events with levels `INFO` and below:
204 /// let level_filter = LevelFilter::INFO;
205 ///
206 /// // Combine the two filters together, returning a filter that only enables
207 /// // spans and events that *both* filters will enable:
208 /// let filter = target_filter.and(level_filter);
209 ///
210 /// tracing_subscriber::registry()
211 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
212 /// .init();
213 ///
214 /// // This event will *not* be enabled:
215 /// tracing::info!("an event with an uninteresting target");
216 ///
217 /// // This event *will* be enabled:
218 /// tracing::info!(target: "interesting_target", "a very interesting event");
219 ///
220 /// // This event will *not* be enabled:
221 /// tracing::debug!(target: "interesting_target", "interesting debug event...");
222 /// ```
223 ///
224 /// [`Filter`]: crate::layer::Filter
225 fn and<B>(self, other: B) -> combinator::And<Self, B, S>
226 where
227 Self: Sized,
228 B: layer::Filter<S>,
229 {
230 combinator::And::new(self, other)
231 }
232
233 /// Combines two [`Filter`]s so that spans and events are enabled if *either* filter
234 /// returns `true`.
235 ///
236 /// # Examples
237 ///
238 /// Enabling spans and events at the `INFO` level and above, and all spans
    /// and events with a particular target:
    ///
240 /// ```
241 /// use tracing_subscriber::{
242 /// filter::{filter_fn, LevelFilter, FilterExt},
243 /// prelude::*,
244 /// };
245 ///
246 /// // Enables spans and events with targets starting with `interesting_target`:
247 /// let target_filter = filter_fn(|meta| {
248 /// meta.target().starts_with("interesting_target")
249 /// });
250 ///
251 /// // Enables spans and events with levels `INFO` and below:
252 /// let level_filter = LevelFilter::INFO;
253 ///
254 /// // Combine the two filters together so that a span or event is enabled
255 /// // if it is at INFO or lower, or if it has a target starting with
256 /// // `interesting_target`.
257 /// let filter = level_filter.or(target_filter);
258 ///
259 /// tracing_subscriber::registry()
260 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
261 /// .init();
262 ///
263 /// // This event will *not* be enabled:
264 /// tracing::debug!("an uninteresting event");
265 ///
266 /// // This event *will* be enabled:
267 /// tracing::info!("an uninteresting INFO event");
268 ///
269 /// // This event *will* be enabled:
270 /// tracing::info!(target: "interesting_target", "a very interesting event");
271 ///
272 /// // This event *will* be enabled:
273 /// tracing::debug!(target: "interesting_target", "interesting debug event...");
274 /// ```
275 ///
276 /// Enabling a higher level for a particular target by using `or` in
277 /// conjunction with the [`and`] combinator:
278 ///
279 /// ```
280 /// use tracing_subscriber::{
281 /// filter::{filter_fn, LevelFilter, FilterExt},
282 /// prelude::*,
283 /// };
284 ///
285 /// // This filter will enable spans and events with targets beginning with
286 /// // `my_crate`:
287 /// let my_crate = filter_fn(|meta| {
288 /// meta.target().starts_with("my_crate")
289 /// });
290 ///
291 /// let filter = my_crate
292 /// // Combine the `my_crate` filter with a `LevelFilter` to produce a
293 /// // filter that will enable the `INFO` level and lower for spans and
294 /// // events with `my_crate` targets:
295 /// .and(LevelFilter::INFO)
296 /// // If a span or event *doesn't* have a target beginning with
297 /// // `my_crate`, enable it if it has the `WARN` level or lower:
298 /// .or(LevelFilter::WARN);
299 ///
300 /// tracing_subscriber::registry()
301 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
302 /// .init();
303 /// ```
304 ///
305 /// [`Filter`]: crate::layer::Filter
306 /// [`and`]: FilterExt::and
307 fn or<B>(self, other: B) -> combinator::Or<Self, B, S>
308 where
309 Self: Sized,
310 B: layer::Filter<S>,
311 {
312 combinator::Or::new(self, other)
313 }
314
315 /// Inverts `self`, returning a filter that enables spans and events only if
316 /// `self` would *not* enable them.
317 ///
318 /// This inverts the values returned by the [`enabled`] and [`callsite_enabled`]
319 /// methods on the wrapped filter; it does *not* invert [`event_enabled`], as
320 /// filters which do not implement filtering on event field values will return
321 /// the default `true` even for events that their [`enabled`] method disables.
322 ///
323 /// Consider a normal filter defined as:
324 ///
325 /// ```ignore (pseudo-code)
326 /// // for spans
327 /// match callsite_enabled() {
328 /// ALWAYS => on_span(),
329 /// SOMETIMES => if enabled() { on_span() },
330 /// NEVER => (),
331 /// }
332 /// // for events
333 /// match callsite_enabled() {
334 /// ALWAYS => on_event(),
335 /// SOMETIMES => if enabled() && event_enabled() { on_event() },
336 /// NEVER => (),
337 /// }
338 /// ```
339 ///
340 /// and an inverted filter defined as:
341 ///
342 /// ```ignore (pseudo-code)
343 /// // for spans
344 /// match callsite_enabled() {
345 /// ALWAYS => (),
346 /// SOMETIMES => if !enabled() { on_span() },
347 /// NEVER => on_span(),
348 /// }
349 /// // for events
350 /// match callsite_enabled() {
351 /// ALWAYS => (),
352 /// SOMETIMES => if !enabled() { on_event() },
353 /// NEVER => on_event(),
354 /// }
355 /// ```
356 ///
357 /// A proper inversion would do `!(enabled() && event_enabled())` (or
358 /// `!enabled() || !event_enabled()`), but because of the implicit `&&`
359 /// relation between `enabled` and `event_enabled`, it is difficult to
360 /// short circuit and not call the wrapped `event_enabled`.
361 ///
362 /// A combinator which remembers the result of `enabled` in order to call
363 /// `event_enabled` only when `enabled() == true` is possible, but requires
364 /// additional thread-local mutable state to support a very niche use case.
365 //
366 // Also, it'd mean the wrapped layer's `enabled()` always gets called and
367 // globally applied to events where it doesn't today, since we can't know
368 // what `event_enabled` will say until we have the event to call it with.
369 ///
370 /// [`Filter`]: crate::layer::Filter
371 /// [`enabled`]: crate::layer::Filter::enabled
372 /// [`event_enabled`]: crate::layer::Filter::event_enabled
373 /// [`callsite_enabled`]: crate::layer::Filter::callsite_enabled
374 fn not(self) -> combinator::Not<Self, S>
375 where
376 Self: Sized,
377 {
378 combinator::Not::new(self)
379 }
380
381 /// [Boxes] `self`, erasing its concrete type.
382 ///
383 /// This is equivalent to calling [`Box::new`], but in method form, so that
384 /// it can be used when chaining combinator methods.
385 ///
386 /// # Examples
387 ///
388 /// When different combinations of filters are used conditionally, they may
389 /// have different types. For example, the following code won't compile,
390 /// since the `if` and `else` clause produce filters of different types:
391 ///
392 /// ```compile_fail
393 /// use tracing_subscriber::{
394 /// filter::{filter_fn, LevelFilter, FilterExt},
395 /// prelude::*,
396 /// };
397 ///
398 /// let enable_bar_target: bool = // ...
399 /// # false;
400 ///
401 /// let filter = if enable_bar_target {
402 /// filter_fn(|meta| meta.target().starts_with("foo"))
403 /// // If `enable_bar_target` is true, add a `filter_fn` enabling
404 /// // spans and events with the target `bar`:
405 /// .or(filter_fn(|meta| meta.target().starts_with("bar")))
406 /// .and(LevelFilter::INFO)
407 /// } else {
408 /// filter_fn(|meta| meta.target().starts_with("foo"))
409 /// .and(LevelFilter::INFO)
410 /// };
411 ///
412 /// tracing_subscriber::registry()
413 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
414 /// .init();
415 /// ```
416 ///
417 /// By using `boxed`, the types of the two different branches can be erased,
418 /// so the assignment to the `filter` variable is valid (as both branches
419 /// have the type `Box<dyn Filter<S> + Send + Sync + 'static>`). The
420 /// following code *does* compile:
421 ///
422 /// ```
423 /// use tracing_subscriber::{
424 /// filter::{filter_fn, LevelFilter, FilterExt},
425 /// prelude::*,
426 /// };
427 ///
428 /// let enable_bar_target: bool = // ...
429 /// # false;
430 ///
431 /// let filter = if enable_bar_target {
432 /// filter_fn(|meta| meta.target().starts_with("foo"))
433 /// .or(filter_fn(|meta| meta.target().starts_with("bar")))
434 /// .and(LevelFilter::INFO)
435 /// // Boxing the filter erases its type, so both branches now
436 /// // have the same type.
437 /// .boxed()
438 /// } else {
439 /// filter_fn(|meta| meta.target().starts_with("foo"))
440 /// .and(LevelFilter::INFO)
441 /// .boxed()
442 /// };
443 ///
444 /// tracing_subscriber::registry()
445 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
446 /// .init();
447 /// ```
448 ///
449 /// [Boxes]: std::boxed
450 /// [`Box::new`]: std::boxed::Box::new
451 fn boxed(self) -> Box<dyn layer::Filter<S> + Send + Sync + 'static>
452 where
453 Self: Sized + Send + Sync + 'static,
454 {
455 Box::new(self)
456 }
457}
458
459// === impl Filter ===
460
461#[cfg(feature = "registry")]
462#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
463impl<S> layer::Filter<S> for LevelFilter {
464 fn enabled(&self, meta: &Metadata<'_>, _: &Context<'_, S>) -> bool {
465 meta.level() <= self
466 }
467
468 fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
469 if meta.level() <= self {
470 Interest::always()
471 } else {
472 Interest::never()
473 }
474 }
475
476 fn max_level_hint(&self) -> Option<LevelFilter> {
477 Some(*self)
478 }
479}
480
481macro_rules! filter_impl_body {
482 () => {
483 #[inline]
484 fn enabled(&self, meta: &Metadata<'_>, cx: &Context<'_, S>) -> bool {
485 self.deref().enabled(meta, cx)
486 }
487
488 #[inline]
489 fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
490 self.deref().callsite_enabled(meta)
491 }
492
493 #[inline]
494 fn max_level_hint(&self) -> Option<LevelFilter> {
495 self.deref().max_level_hint()
496 }
497
498 #[inline]
499 fn event_enabled(&self, event: &Event<'_>, cx: &Context<'_, S>) -> bool {
500 self.deref().event_enabled(event, cx)
501 }
502
503 #[inline]
504 fn on_new_span(&self, attrs: &span::Attributes<'_>, id: &span::Id, ctx: Context<'_, S>) {
505 self.deref().on_new_span(attrs, id, ctx)
506 }
507
508 #[inline]
509 fn on_record(&self, id: &span::Id, values: &span::Record<'_>, ctx: Context<'_, S>) {
510 self.deref().on_record(id, values, ctx)
511 }
512
513 #[inline]
514 fn on_enter(&self, id: &span::Id, ctx: Context<'_, S>) {
515 self.deref().on_enter(id, ctx)
516 }
517
518 #[inline]
519 fn on_exit(&self, id: &span::Id, ctx: Context<'_, S>) {
520 self.deref().on_exit(id, ctx)
521 }
522
523 #[inline]
524 fn on_close(&self, id: span::Id, ctx: Context<'_, S>) {
525 self.deref().on_close(id, ctx)
526 }
527 };
528}
529
530#[cfg(feature = "registry")]
531#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
532impl<S> layer::Filter<S> for Arc<dyn layer::Filter<S> + Send + Sync + 'static> {
533 filter_impl_body!();
534}
535
536#[cfg(feature = "registry")]
537#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
538impl<S> layer::Filter<S> for Box<dyn layer::Filter<S> + Send + Sync + 'static> {
539 filter_impl_body!();
540}
541
542// Implement Filter for Option<Filter> where None => allow
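// For example (a sketch using the `fmt` layer), a filter that may or may not
// be present in a user's configuration can be passed to `with_filter`
// directly; a `None` filter simply enables everything:
//
//     let maybe_filter: Option<LevelFilter> = None; // e.g. from an optional config value
//     tracing_subscriber::registry()
//         .with(tracing_subscriber::fmt::layer().with_filter(maybe_filter))
//         .init();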
543#[cfg(feature = "registry")]
544#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
545impl<F, S> layer::Filter<S> for Option<F>
546where
547 F: layer::Filter<S>,
548{
549 #[inline]
550 fn enabled(&self, meta: &Metadata<'_>, ctx: &Context<'_, S>) -> bool {
551 self.as_ref()
552 .map(|inner| inner.enabled(meta, ctx))
553 .unwrap_or(true)
554 }
555
556 #[inline]
557 fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
558 self.as_ref()
559 .map(|inner| inner.callsite_enabled(meta))
560 .unwrap_or_else(Interest::always)
561 }
562
563 #[inline]
564 fn max_level_hint(&self) -> Option<LevelFilter> {
565 self.as_ref().and_then(|inner| inner.max_level_hint())
566 }
567
568 #[inline]
569 fn event_enabled(&self, event: &Event<'_>, ctx: &Context<'_, S>) -> bool {
570 self.as_ref()
571 .map(|inner| inner.event_enabled(event, ctx))
572 .unwrap_or(true)
573 }
574
575 #[inline]
576 fn on_new_span(&self, attrs: &span::Attributes<'_>, id: &span::Id, ctx: Context<'_, S>) {
577 if let Some(inner) = self {
578 inner.on_new_span(attrs, id, ctx)
579 }
580 }
581
582 #[inline]
583 fn on_record(&self, id: &span::Id, values: &span::Record<'_>, ctx: Context<'_, S>) {
584 if let Some(inner) = self {
585 inner.on_record(id, values, ctx)
586 }
587 }
588
589 #[inline]
590 fn on_enter(&self, id: &span::Id, ctx: Context<'_, S>) {
591 if let Some(inner) = self {
592 inner.on_enter(id, ctx)
593 }
594 }
595
596 #[inline]
597 fn on_exit(&self, id: &span::Id, ctx: Context<'_, S>) {
598 if let Some(inner) = self {
599 inner.on_exit(id, ctx)
600 }
601 }
602
603 #[inline]
604 fn on_close(&self, id: span::Id, ctx: Context<'_, S>) {
605 if let Some(inner) = self {
606 inner.on_close(id, ctx)
607 }
608 }
609}
610
611// === impl Filtered ===
612
613impl<L, F, S> Filtered<L, F, S> {
614 /// Wraps the provided [`Layer`] so that it is filtered by the given
615 /// [`Filter`].
616 ///
617 /// This is equivalent to calling the [`Layer::with_filter`] method.
618 ///
619 /// See the [documentation on per-layer filtering][plf] for details.
620 ///
621 /// [`Filter`]: crate::layer::Filter
622 /// [plf]: crate::layer#per-layer-filtering
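    ///
    /// # Examples
    ///
    /// A minimal sketch, wrapping the `fmt` layer with a `LevelFilter`
    /// (equivalent to `fmt::layer().with_filter(LevelFilter::INFO)`):
    ///
    /// ```
    /// use tracing_subscriber::{filter::{self, LevelFilter}, prelude::*};
    ///
    /// // Wrap the `fmt` layer so that it only sees spans and events at the
    /// // `INFO` level and below.
    /// let filtered = filter::Filtered::new(tracing_subscriber::fmt::layer(), LevelFilter::INFO);
    ///
    /// tracing_subscriber::registry().with(filtered).init();
    /// ```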
623 pub fn new(layer: L, filter: F) -> Self {
624 Self {
625 layer,
626 filter,
627 id: MagicPlfDowncastMarker(FilterId::disabled()),
628 _s: PhantomData,
629 }
630 }
631
632 #[inline(always)]
633 fn id(&self) -> FilterId {
634 debug_assert!(
635 !self.id.0.is_disabled(),
636 "a `Filtered` layer was used, but it had no `FilterId`; \
637 was it registered with the subscriber?"
638 );
639 self.id.0
640 }
641
642 fn did_enable(&self, f: impl FnOnce()) {
643 FILTERING.with(|filtering| filtering.did_enable(self.id(), f))
644 }
645
646 /// Borrows the [`Filter`](crate::layer::Filter) used by this layer.
647 pub fn filter(&self) -> &F {
648 &self.filter
649 }
650
651 /// Mutably borrows the [`Filter`](crate::layer::Filter) used by this layer.
652 ///
653 /// When this layer can be mutably borrowed, this may be used to mutate the filter.
654 /// Generally, this will primarily be used with the
655 /// [`reload::Handle::modify`](crate::reload::Handle::modify) method.
656 ///
657 /// # Examples
658 ///
659 /// ```
660 /// # use tracing::info;
661 /// # use tracing_subscriber::{filter,fmt,reload,Registry,prelude::*};
662 /// # fn main() {
663 /// let filtered_layer = fmt::Layer::default().with_filter(filter::LevelFilter::WARN);
664 /// let (filtered_layer, reload_handle) = reload::Layer::new(filtered_layer);
665 /// #
666 /// # // specifying the Registry type is required
667 /// # let _: &reload::Handle<filter::Filtered<fmt::Layer<Registry>,
668 /// # filter::LevelFilter, Registry>,Registry>
669 /// # = &reload_handle;
670 /// #
671 /// info!("This will be ignored");
672 /// reload_handle.modify(|layer| *layer.filter_mut() = filter::LevelFilter::INFO);
673 /// info!("This will be logged");
674 /// # }
675 /// ```
676 pub fn filter_mut(&mut self) -> &mut F {
677 &mut self.filter
678 }
679
680 /// Borrows the inner [`Layer`] wrapped by this `Filtered` layer.
681 pub fn inner(&self) -> &L {
682 &self.layer
683 }
684
685 /// Mutably borrows the inner [`Layer`] wrapped by this `Filtered` layer.
686 ///
687 /// This method is primarily expected to be used with the
688 /// [`reload::Handle::modify`](crate::reload::Handle::modify) method.
689 ///
690 /// # Examples
691 ///
692 /// ```
693 /// # use tracing::info;
694 /// # use tracing_subscriber::{filter,fmt,reload,Registry,prelude::*};
695 /// # fn non_blocking<T: std::io::Write>(writer: T) -> (fn() -> std::io::Stdout) {
696 /// # std::io::stdout
697 /// # }
698 /// # fn main() {
699 /// let filtered_layer = fmt::layer().with_writer(non_blocking(std::io::stderr())).with_filter(filter::LevelFilter::INFO);
700 /// let (filtered_layer, reload_handle) = reload::Layer::new(filtered_layer);
701 /// #
702 /// # // specifying the Registry type is required
703 /// # let _: &reload::Handle<filter::Filtered<fmt::Layer<Registry, _, _, fn() -> std::io::Stdout>,
704 /// # filter::LevelFilter, Registry>, Registry>
705 /// # = &reload_handle;
706 /// #
707 /// info!("This will be logged to stderr");
708 /// reload_handle.modify(|layer| *layer.inner_mut().writer_mut() = non_blocking(std::io::stdout()));
709 /// info!("This will be logged to stdout");
710 /// # }
711 /// ```
712 ///
713 /// [`Layer`]: crate::layer::Layer
714 pub fn inner_mut(&mut self) -> &mut L {
715 &mut self.layer
716 }
717}
718
719impl<S, L, F> Layer<S> for Filtered<L, F, S>
720where
721 S: Subscriber + for<'span> registry::LookupSpan<'span> + 'static,
722 F: layer::Filter<S> + 'static,
723 L: Layer<S>,
724{
725 fn on_register_dispatch(&self, subscriber: &Dispatch) {
726 self.layer.on_register_dispatch(subscriber);
727 }
728
729 fn on_layer(&mut self, subscriber: &mut S) {
730 self.id = MagicPlfDowncastMarker(subscriber.register_filter());
731 self.layer.on_layer(subscriber);
732 }
733
734 // TODO(eliza): can we figure out a nice way to make the `Filtered` layer
735 // not call `is_enabled_for` in hooks that the inner layer doesn't actually
736 // have real implementations of? probably not...
737 //
738 // it would be cool if there was some wild rust reflection way of checking
739 // if a trait impl has the default impl of a trait method or not, but that's
740 // almost certainly impossible...right?
741
742 fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest {
743 let interest = self.filter.callsite_enabled(metadata);
744
745 // If the filter didn't disable the callsite, allow the inner layer to
746 // register it — since `register_callsite` is also used for purposes
747 // such as reserving/caching per-callsite data, we want the inner layer
748 // to be able to perform any other registration steps. However, we'll
749 // ignore its `Interest`.
750 if !interest.is_never() {
751 self.layer.register_callsite(metadata);
752 }
753
754 // Add our `Interest` to the current sum of per-layer filter `Interest`s
755 // for this callsite.
756 FILTERING.with(|filtering| filtering.add_interest(interest));
757
758 // don't short circuit! if the stack consists entirely of `Layer`s with
759 // per-layer filters, the `Registry` will return the actual `Interest`
760 // value that's the sum of all the `register_callsite` calls to those
761 // per-layer filters. if we returned an actual `never` interest here, a
762 // `Layered` layer would short-circuit and not allow any `Filtered`
763 // layers below us if _they_ are interested in the callsite.
764 Interest::always()
765 }
766
767 fn enabled(&self, metadata: &Metadata<'_>, cx: Context<'_, S>) -> bool {
768 let cx = cx.with_filter(self.id());
769 let enabled = self.filter.enabled(metadata, &cx);
770 FILTERING.with(|filtering| filtering.set(self.id(), enabled));
771
772 if enabled {
773 // If the filter enabled this metadata, ask the wrapped layer if
774 // _it_ wants it --- it might have a global filter.
775 self.layer.enabled(metadata, cx)
776 } else {
777 // Otherwise, return `true`. The _per-layer_ filter disabled this
778 // metadata, but returning `false` in `Layer::enabled` will
779 // short-circuit and globally disable the span or event. This is
780 // *not* what we want for per-layer filters, as other layers may
781 // still want this event. Returning `true` here means we'll continue
782 // asking the next layer in the stack.
783 //
784 // Once all per-layer filters have been evaluated, the `Registry`
785 // at the root of the stack will return `false` from its `enabled`
786 // method if *every* per-layer filter disabled this metadata.
787 // Otherwise, the individual per-layer filters will skip the next
788 // `new_span` or `on_event` call for their layer if *they* disabled
789 // the span or event, but it was not globally disabled.
790 true
791 }
792 }
793
794 fn on_new_span(&self, attrs: &span::Attributes<'_>, id: &span::Id, cx: Context<'_, S>) {
795 self.did_enable(|| {
796 let cx = cx.with_filter(self.id());
797 self.filter.on_new_span(attrs, id, cx.clone());
798 self.layer.on_new_span(attrs, id, cx);
799 })
800 }
801
802 #[doc(hidden)]
803 fn max_level_hint(&self) -> Option<LevelFilter> {
804 self.filter.max_level_hint()
805 }
806
807 fn on_record(&self, span: &span::Id, values: &span::Record<'_>, cx: Context<'_, S>) {
808 if let Some(cx) = cx.if_enabled_for(span, self.id()) {
809 self.filter.on_record(span, values, cx.clone());
810 self.layer.on_record(span, values, cx)
811 }
812 }
813
814 fn on_follows_from(&self, span: &span::Id, follows: &span::Id, cx: Context<'_, S>) {
815 // only call `on_follows_from` if both spans are enabled by us
816 if cx.is_enabled_for(span, self.id()) && cx.is_enabled_for(follows, self.id()) {
817 self.layer
818 .on_follows_from(span, follows, cx.with_filter(self.id()))
819 }
820 }
821
822 fn event_enabled(&self, event: &Event<'_>, cx: Context<'_, S>) -> bool {
823 let cx = cx.with_filter(self.id());
824 let enabled = FILTERING
825 .with(|filtering| filtering.and(self.id(), || self.filter.event_enabled(event, &cx)));
826
827 if enabled {
828 // If the filter enabled this event, ask the wrapped subscriber if
829 // _it_ wants it --- it might have a global filter.
830 self.layer.event_enabled(event, cx)
831 } else {
832 // Otherwise, return `true`. See the comment in `enabled` for why this
833 // is necessary.
834 true
835 }
836 }
837
838 fn on_event(&self, event: &Event<'_>, cx: Context<'_, S>) {
839 self.did_enable(|| {
840 self.layer.on_event(event, cx.with_filter(self.id()));
841 })
842 }
843
844 fn on_enter(&self, id: &span::Id, cx: Context<'_, S>) {
845 if let Some(cx) = cx.if_enabled_for(id, self.id()) {
846 self.filter.on_enter(id, cx.clone());
847 self.layer.on_enter(id, cx);
848 }
849 }
850
851 fn on_exit(&self, id: &span::Id, cx: Context<'_, S>) {
852 if let Some(cx) = cx.if_enabled_for(id, self.id()) {
853 self.filter.on_exit(id, cx.clone());
854 self.layer.on_exit(id, cx);
855 }
856 }
857
858 fn on_close(&self, id: span::Id, cx: Context<'_, S>) {
859 if let Some(cx) = cx.if_enabled_for(&id, self.id()) {
860 self.filter.on_close(id.clone(), cx.clone());
861 self.layer.on_close(id, cx);
862 }
863 }
864
865 // XXX(eliza): the existence of this method still makes me sad...
866 fn on_id_change(&self, old: &span::Id, new: &span::Id, cx: Context<'_, S>) {
867 if let Some(cx) = cx.if_enabled_for(old, self.id()) {
868 self.layer.on_id_change(old, new, cx)
869 }
870 }
871
872 #[doc(hidden)]
873 #[inline]
874 unsafe fn downcast_raw(&self, id: TypeId) -> Option<*const ()> {
875 match id {
876 id if id == TypeId::of::<Self>() => Some(self as *const _ as *const ()),
877 id if id == TypeId::of::<L>() => Some(&self.layer as *const _ as *const ()),
878 id if id == TypeId::of::<F>() => Some(&self.filter as *const _ as *const ()),
879 id if id == TypeId::of::<MagicPlfDowncastMarker>() => {
880 Some(&self.id as *const _ as *const ())
881 }
882 _ => self.layer.downcast_raw(id),
883 }
884 }
885}
886
887impl<F, L, S> fmt::Debug for Filtered<F, L, S>
888where
889 F: fmt::Debug,
890 L: fmt::Debug,
891{
892 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
893 f.debug_struct("Filtered")
894 .field("filter", &self.filter)
895 .field("layer", &self.layer)
896 .field("id", &self.id)
897 .finish()
898 }
899}
900
901// === impl FilterId ===
902
903impl FilterId {
904 const fn disabled() -> Self {
905 Self(u64::MAX)
906 }
907
908 /// Returns a `FilterId` that will consider _all_ spans enabled.
909 pub(crate) const fn none() -> Self {
910 Self(0)
911 }
912
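    /// Returns the `FilterId` for the `id`-th per-layer filter registered with
    /// a subscriber. IDs are represented as single-bit masks (`new(0)` is
    /// `0b0000_0001`, `new(2)` is `0b0000_0100`, and so on); see the
    /// documentation of [`FilterId::and`] for why masks are used rather than
    /// shift amounts.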
913 pub(crate) fn new(id: u8) -> Self {
        assert!(id < 64, "filter IDs must be less than 64");
915 Self(1 << id as usize)
916 }
917
918 /// Combines two `FilterId`s, returning a new `FilterId` that will match a
919 /// [`FilterMap`] where the span was disabled by _either_ this `FilterId`
920 /// *or* the combined `FilterId`.
921 ///
922 /// This method is called by [`Context`]s when adding the `FilterId` of a
923 /// [`Filtered`] layer to the context.
924 ///
925 /// This is necessary for cases where we have a tree of nested [`Filtered`]
926 /// layers, like this:
927 ///
928 /// ```text
929 /// Filtered {
930 /// filter1,
931 /// Layered {
932 /// layer1,
933 /// Filtered {
934 /// filter2,
935 /// layer2,
936 /// },
937 /// }
938 /// ```
939 ///
940 /// We want `layer2` to be affected by both `filter1` _and_ `filter2`.
941 /// Without combining `FilterId`s, this works fine when filtering
942 /// `on_event`/`new_span`, because the outer `Filtered` layer (`filter1`)
943 /// won't call the inner layer's `on_event` or `new_span` callbacks if it
944 /// disabled the event/span.
945 ///
946 /// However, it _doesn't_ work when filtering span lookups and traversals
947 /// (e.g. `scope`). This is because the [`Context`] passed to `layer2`
948 /// would set its filter ID to the filter ID of `filter2`, and would skip
949 /// spans that were disabled by `filter2`. However, what if a span was
950 /// disabled by `filter1`? We wouldn't see it in `new_span`, but we _would_
951 /// see it in lookups and traversals...which we don't want.
952 ///
953 /// When a [`Filtered`] layer adds its ID to a [`Context`], it _combines_ it
954 /// with any previous filter ID that the context had, rather than replacing
955 /// it. That way, `layer2`'s context will check if a span was disabled by
    /// `filter1` _or_ `filter2`. The way we do this is that, instead of
    /// representing `FilterId`s as a number that we shift a 1 over by to get a
    /// mask, we just store the actual mask, so we can combine them with a
    /// bitwise-OR.
959 ///
960 /// For example, if we consider the following case (pretending that the
    /// masks are 8 bits instead of 64 just so I don't have to write out a bunch
962 /// of extra zeroes):
963 ///
964 /// - `filter1` has the filter id 1 (`0b0000_0001`)
965 /// - `filter2` has the filter id 2 (`0b0000_0010`)
966 ///
967 /// A span that gets disabled by filter 1 would have the [`FilterMap`] with
968 /// bits `0b0000_0001`.
969 ///
    /// If the `FilterId` was internally represented as `(bits to shift + 1)`,
971 /// when `layer2`'s [`Context`] checked if it enabled the span, it would
972 /// make the mask `0b0000_0010` (`1 << 1`). That bit would not be set in the
973 /// [`FilterMap`], so it would see that it _didn't_ disable the span. Which
    /// is *true*; it just doesn't reflect the tree-like shape of the actual
975 /// subscriber.
976 ///
977 /// By having the IDs be masks instead of shifts, though, when the
978 /// [`Filtered`] with `filter2` gets the [`Context`] with `filter1`'s filter ID,
    /// instead of replacing it, it ORs them together:
980 ///
981 /// ```ignore
982 /// 0b0000_0001 | 0b0000_0010 == 0b0000_0011;
983 /// ```
984 ///
985 /// We then test if the span was disabled by seeing if _any_ bits in the
986 /// mask are `1`:
987 ///
988 /// ```ignore
989 /// filtermap & mask != 0;
990 /// 0b0000_0001 & 0b0000_0011 != 0;
991 /// 0b0000_0001 != 0;
992 /// true;
993 /// ```
994 ///
995 /// [`Context`]: crate::layer::Context
996 pub(crate) fn and(self, FilterId(other): Self) -> Self {
997 // If this mask is disabled, just return the other --- otherwise, we
998 // would always see that every span is disabled.
999 if self.0 == Self::disabled().0 {
1000 return Self(other);
1001 }
1002
1003 Self(self.0 | other)
1004 }
1005
1006 fn is_disabled(self) -> bool {
1007 self.0 == Self::disabled().0
1008 }
1009}
1010
1011impl fmt::Debug for FilterId {
1012 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1013 // don't print a giant set of the numbers 0..63 if the filter ID is disabled.
1014 if self.0 == Self::disabled().0 {
1015 return f
1016 .debug_tuple("FilterId")
1017 .field(&format_args!("DISABLED"))
1018 .finish();
1019 }
1020
1021 if f.alternate() {
1022 f.debug_struct("FilterId")
1023 .field("ids", &format_args!("{:?}", FmtBitset(self.0)))
1024 .field("bits", &format_args!("{:b}", self.0))
1025 .finish()
1026 } else {
1027 f.debug_tuple("FilterId").field(&FmtBitset(self.0)).finish()
1028 }
1029 }
1030}
1031
1032impl fmt::Binary for FilterId {
1033 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1034 f.debug_tuple("FilterId")
1035 .field(&format_args!("{:b}", self.0))
1036 .finish()
1037 }
1038}
1039
1040// === impl FilterExt ===
1041
1042impl<F, S> FilterExt<S> for F where F: layer::Filter<S> {}
1043
1044// === impl FilterMap ===
1045
1046impl FilterMap {
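    /// Records whether the filter with the given [`FilterId`] enabled the
    /// current span or event: a set bit means "disabled by this filter", so
    /// the bit is cleared when `enabled` is `true` and set when it is `false`.
    /// A disabled (`u64::MAX`) filter ID leaves the map unchanged.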
1047 pub(crate) fn set(self, FilterId(mask): FilterId, enabled: bool) -> Self {
1048 if mask == u64::MAX {
1049 return self;
1050 }
1051
1052 if enabled {
1053 Self {
1054 bits: self.bits & (!mask),
1055 }
1056 } else {
1057 Self {
1058 bits: self.bits | mask,
1059 }
1060 }
1061 }
1062
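    /// Returns `true` if the filter with the given [`FilterId`] did *not*
    /// disable the current span or event (i.e., none of the bits in its mask
    /// are set in this map).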
1063 #[inline]
1064 pub(crate) fn is_enabled(self, FilterId(mask): FilterId) -> bool {
1065 self.bits & mask == 0
1066 }
1067
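    /// Returns `true` unless *every* per-layer filter has disabled the current
    /// span or event (i.e., unless all 64 bits are set).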
1068 #[inline]
1069 pub(crate) fn any_enabled(self) -> bool {
1070 self.bits != u64::MAX
1071 }
1072}
1073
1074impl fmt::Debug for FilterMap {
1075 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1076 let alt = f.alternate();
1077 let mut s = f.debug_struct("FilterMap");
1078 s.field("disabled_by", &format_args!("{:?}", &FmtBitset(self.bits)));
1079
1080 if alt {
1081 s.field("bits", &format_args!("{:b}", self.bits));
1082 }
1083
1084 s.finish()
1085 }
1086}
1087
1088impl fmt::Binary for FilterMap {
1089 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1090 f.debug_struct("FilterMap")
1091 .field("bits", &format_args!("{:b}", self.bits))
1092 .finish()
1093 }
1094}
1095
1096// === impl FilterState ===
1097
1098impl FilterState {
1099 const fn new() -> Self {
1100 Self {
1101 enabled: Cell::new(FilterMap::new()),
1102 interest: RefCell::new(None),
1103
1104 #[cfg(debug_assertions)]
1105 counters: DebugCounters::new(),
1106 }
1107 }
1108
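    /// Records, in this thread's `FilterState`, whether the per-layer filter
    /// with the given ID enabled the span or event currently being filtered.
    ///
    /// This is called by `Filtered::enabled` for each `Filtered` layer in the
    /// stack.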
1109 fn set(&self, filter: FilterId, enabled: bool) {
1110 #[cfg(debug_assertions)]
1111 {
1112 let in_current_pass = self.counters.in_filter_pass.get();
1113 if in_current_pass == 0 {
1114 debug_assert_eq!(self.enabled.get(), FilterMap::new());
1115 }
1116 self.counters.in_filter_pass.set(in_current_pass + 1);
1117 debug_assert_eq!(
1118 self.counters.in_interest_pass.get(),
1119 0,
1120 "if we are in or starting a filter pass, we must not be in an interest pass."
1121 )
1122 }
1123
1124 self.enabled.set(self.enabled.get().set(filter, enabled))
1125 }
1126
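    /// Folds a per-layer filter's callsite `Interest` into the combined
    /// `Interest` accumulated on this thread during the current
    /// `register_callsite` pass (e.g. `always` combined with `never` becomes
    /// `sometimes`).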
1127 fn add_interest(&self, interest: Interest) {
1128 let mut curr_interest = self.interest.borrow_mut();
1129
1130 #[cfg(debug_assertions)]
1131 {
1132 let in_current_pass = self.counters.in_interest_pass.get();
1133 if in_current_pass == 0 {
1134 debug_assert!(curr_interest.is_none());
1135 }
1136 self.counters.in_interest_pass.set(in_current_pass + 1);
1137 }
1138
1139 if let Some(curr_interest) = curr_interest.as_mut() {
1140 if (curr_interest.is_always() && !interest.is_always())
1141 || (curr_interest.is_never() && !interest.is_never())
1142 {
1143 *curr_interest = Interest::sometimes();
1144 }
1145 // If the two interests are the same, do nothing. If the current
1146 // interest is `sometimes`, stay sometimes.
1147 } else {
1148 *curr_interest = Some(interest);
1149 }
1150 }
1151
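    /// Returns `true` unless *every* per-layer filter that ran in the current
    /// `enabled` pass on this thread disabled the event (falling back to
    /// `true` if the thread-local state is unavailable).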
1152 pub(crate) fn event_enabled() -> bool {
1153 FILTERING
1154 .try_with(|this| {
1155 let enabled = this.enabled.get().any_enabled();
1156 #[cfg(debug_assertions)]
1157 {
1158 if this.counters.in_filter_pass.get() == 0 {
1159 debug_assert_eq!(this.enabled.get(), FilterMap::new());
1160 }
1161
                    // Nothing enabled this event, so we won't tick back down the
1163 // counter in `did_enable`. Reset it.
1164 if !enabled {
1165 this.counters.in_filter_pass.set(0);
1166 }
1167 }
1168 enabled
1169 })
1170 .unwrap_or(true)
1171 }
1172
1173 /// Executes a closure if the filter with the provided ID did not disable
1174 /// the current span/event.
1175 ///
1176 /// This is used to implement the `on_event` and `new_span` methods for
1177 /// `Filtered`.
1178 fn did_enable(&self, filter: FilterId, f: impl FnOnce()) {
1179 let map = self.enabled.get();
1180 if map.is_enabled(filter) {
1181 // If the filter didn't disable the current span/event, run the
1182 // callback.
1183 f();
1184 } else {
1185 // Otherwise, if this filter _did_ disable the span or event
1186 // currently being processed, clear its bit from this thread's
1187 // `FilterState`. The bit has already been "consumed" by skipping
1188 // this callback, and we need to ensure that the `FilterMap` for
1189 // this thread is reset when the *next* `enabled` call occurs.
1190 self.enabled.set(map.set(filter, true));
1191 }
1192 #[cfg(debug_assertions)]
1193 {
1194 let in_current_pass = self.counters.in_filter_pass.get();
1195 if in_current_pass <= 1 {
1196 debug_assert_eq!(self.enabled.get(), FilterMap::new());
1197 }
1198 self.counters
1199 .in_filter_pass
1200 .set(in_current_pass.saturating_sub(1));
1201 debug_assert_eq!(
1202 self.counters.in_interest_pass.get(),
1203 0,
1204 "if we are in a filter pass, we must not be in an interest pass."
1205 )
1206 }
1207 }
1208
    /// Runs a second filtering pass (e.g. for `Layer::event_enabled`), ANDing
    /// the result of `f` with this filter's result from the first (`enabled`) pass.
1210 fn and(&self, filter: FilterId, f: impl FnOnce() -> bool) -> bool {
1211 let map = self.enabled.get();
1212 let enabled = map.is_enabled(filter) && f();
1213 self.enabled.set(map.set(filter, enabled));
1214 enabled
1215 }
1216
1217 /// Clears the current in-progress filter state.
1218 ///
    /// This resets the [`FilterMap`] and, in debug builds, the filter-pass
    /// counter.
1221 pub(crate) fn clear_enabled() {
        // Drop the `Result` returned by `try_with` --- if we are in the middle
        // of a panic and the thread-local has been torn down, that's fine, just
        // ignore it rather than panicking.
1225 let _ = FILTERING.try_with(|filtering| {
1226 filtering.enabled.set(FilterMap::new());
1227
1228 #[cfg(debug_assertions)]
1229 filtering.counters.in_filter_pass.set(0);
1230 });
1231 }
1232
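    /// Takes the combined `Interest` accumulated by `add_interest` during the
    /// current `register_callsite` pass, clearing it for the next pass.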
1233 pub(crate) fn take_interest() -> Option<Interest> {
1234 FILTERING
1235 .try_with(|filtering| {
1236 #[cfg(debug_assertions)]
1237 {
1238 if filtering.counters.in_interest_pass.get() == 0 {
1239 debug_assert!(filtering.interest.try_borrow().ok()?.is_none());
1240 }
1241 filtering.counters.in_interest_pass.set(0);
1242 }
1243 filtering.interest.try_borrow_mut().ok()?.take()
1244 })
1245 .ok()?
1246 }
1247
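    /// Returns this thread's current [`FilterMap`], which records the filters
    /// that disabled the span or event currently being filtered.
    ///
    /// The `Registry` stores this map as part of a new span's per-span data.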
1248 pub(crate) fn filter_map(&self) -> FilterMap {
1249 let map = self.enabled.get();
1250 #[cfg(debug_assertions)]
1251 if self.counters.in_filter_pass.get() == 0 {
1252 debug_assert_eq!(map, FilterMap::new());
1253 }
1254
1255 map
1256 }
1257}

/// This is a horrible and bad abuse of the downcasting system to expose
1259/// *internally* whether a layer has per-layer filtering, within
1260/// `tracing-subscriber`, without exposing a public API for it.
1261///
1262/// If a `Layer` has per-layer filtering, it will downcast to a
1263/// `MagicPlfDowncastMarker`. Since layers which contain other layers permit
1264/// downcasting to recurse to their children, this will do the Right Thing with
1265/// layers like Reload, Option, etc.
1266///
1267/// Why is this a wrapper around the `FilterId`, you may ask? Because
1268/// downcasting works by returning a pointer, and we don't want to risk
1269/// introducing UB by constructing pointers that _don't_ point to a valid
1270/// instance of the type they claim to be. In this case, we don't _intend_ for
1271/// this pointer to be dereferenced, so it would actually be fine to return one
1272/// that isn't a valid pointer...but we can't guarantee that the caller won't
1273/// (accidentally) dereference it, so it's better to be safe than sorry. We
1274/// could, alternatively, add an additional field to the type that's used only
1275/// for returning pointers to as as part of the evil downcasting hack, but I
1276/// thought it was nicer to just add a `repr(transparent)` wrapper to the
1277/// existing `FilterId` field, since it won't make the struct any bigger.
1278///
1279/// Don't worry, this isn't on the test. :)
1280#[derive(Clone, Copy)]
1281#[repr(transparent)]
1282struct MagicPlfDowncastMarker(FilterId);
1283impl fmt::Debug for MagicPlfDowncastMarker {
1284 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1285 // Just pretend that `MagicPlfDowncastMarker` doesn't exist for
1286 // `fmt::Debug` purposes...if no one *sees* it in their `Debug` output,
1287 // they don't have to know I thought this code would be a good idea.
1288 fmt::Debug::fmt(&self.0, f)
1289 }
1290}
1291
1292pub(crate) fn is_plf_downcast_marker(type_id: TypeId) -> bool {
1293 type_id == TypeId::of::<MagicPlfDowncastMarker>()
1294}
1295
1296/// Does a type implementing `Subscriber` contain any per-layer filters?
1297pub(crate) fn subscriber_has_plf<S>(subscriber: &S) -> bool
1298where
1299 S: Subscriber,
1300{
1301 (subscriber as &dyn Subscriber).is::<MagicPlfDowncastMarker>()
1302}
1303
1304/// Does a type implementing `Layer` contain any per-layer filters?
1305pub(crate) fn layer_has_plf<L, S>(layer: &L) -> bool
1306where
1307 L: Layer<S>,
1308 S: Subscriber,
1309{
1310 unsafe {
1311 // Safety: we're not actually *doing* anything with this pointer --- we
1312 // only care about the `Option`, which we're turning into a `bool`. So
1313 // even if the layer decides to be evil and give us some kind of invalid
1314 // pointer, we don't ever dereference it, so this is always safe.
1315 layer.downcast_raw(TypeId::of::<MagicPlfDowncastMarker>())
1316 }
1317 .is_some()
1318}
1319
1320struct FmtBitset(u64);
1321
1322impl fmt::Debug for FmtBitset {
1323 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1324 let mut set = f.debug_set();
1325 for bit in 0..64 {
1326 // if the `bit`-th bit is set, add it to the debug set
1327 if self.0 & (1 << bit) != 0 {
1328 set.entry(&bit);
1329 }
1330 }
1331 set.finish()
1332 }
1333}