bitvec::ptr

Struct BitPtr

#[repr(C, packed(1))]
pub struct BitPtr<M = Const, T = usize, O = Lsb0>
where M: Mutability, T: BitStore, O: BitOrder,
{ /* private fields */ }

§Single-Bit Pointer

This structure defines a pointer to exactly one bit in a memory element. It is a structure, rather than an encoding of a *Bit raw pointer, because it contains more information than can be packed into such a pointer. Furthermore, it can uphold the same requirements and guarantees that the rest of the crate demands, whereas a raw pointer cannot.

§Original

*bool and NonNull<bool>

§API Differences

Since raw pointers are not sufficient in space or guarantees, and are limited by not being marked #[fundamental], this is an ordinary struct. Because it cannot use the *const/*mut distinction that raw pointers and references can, this encodes mutability in a type parameter instead.

In order to be consistent with the rest of the crate, particularly the *BitSlice encoding, this enforces that all T element addresses are well-aligned to T and non-null. While this type is used in the API as an analogue of raw pointers, it is restricted in value to only contain the values of valid references to memory, not arbitrary pointers.

§ABI Differences

This is aligned to 1, rather than the processor word, in order to enable some crate-internal space optimizations.

§Type Parameters

  • M: Marks whether the pointer has mutability permissions to the referent memory. Only Mut pointers can be used to create &mut references.
  • T: A memory type used to select both the register width and the bus behavior when performing memory accesses.
  • O: The ordering of bits within a memory element.

§Usage

This structure is used as the bitvec equivalent to *bool. It is used in all raw-pointer APIs and provides behavior to emulate raw pointers. It cannot be directly dereferenced, as it is not a pointer; it can only be transformed back into higher referential types, or used in functions that accept it.

These pointers can never be null or misaligned.

§Safety

Rust and LLVM do not yet have a concept of bit-level initialization. Furthermore, the foundational code that this type uses to manipulate individual bits in memory relies on constructing shared references to memory. This means that, unlike standard pointers, the T element to which BitPtr values point must always be initialized in your program context.

bitvec is not able to detect or enforce this requirement, and is currently not able to avoid it. See BitAccess for more information.

Implementations§

impl<M, T, O> BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

pub const DANGLING: Self = _

The canonical dangling bit-pointer. This selects the starting bit of the canonical dangling element pointer for T.

pub fn new( ptr: Address<M, T>, bit: BitIdx<T::Mem>, ) -> Result<Self, MisalignError<T>>

Tries to construct a BitPtr from a memory location and a bit index.

§Parameters
  • ptr: The address of a memory element. Address wraps raw pointers or references, and enforces that they are not null. BitPtr additionally requires that the address be well-aligned to its type; misaligned addresses cause this to return an error.
  • bit: The index of the selected bit within *ptr.
§Returns

This returns an error if ptr is not aligned to T; otherwise, it returns a new bit-pointer structure to the given element and bit.

You should typically prefer the constructors that start directly from a memory reference or pointer: the TryFrom<*T> implementations, the From<&T> and From<&mut T> implementations, or the ::from_ref(), ::from_mut(), ::from_slice(), and ::from_slice_mut() functions.

pub unsafe fn new_unchecked(ptr: Address<M, T>, bit: BitIdx<T::Mem>) -> Self

Constructs a BitPtr from an address and head index, without checking the address for validity.

§Parameters
  • ptr: The memory address to use in the bit-pointer. See the Safety section.
  • bit: The index of the bit in *ptr that this bit-pointer selects.
§Returns

A new bit-pointer composed of the parameters. No validity checking is performed.

§Safety

The Address type imposes a non-null requirement. BitPtr additionally requires that ptr be well-aligned for T, and presumes that the caller has ensured this with bv_ptr::check_alignment. If this is not the case, then the program is incorrect, and subsequent behavior is not specified.

pub fn address(self) -> Address<M, T>

Gets the address of the base storage element.

pub fn bit(self) -> BitIdx<T::Mem>

Gets the BitIdx that selects the bit within the memory element.

pub fn raw_parts(self) -> (Address<M, T>, BitIdx<T::Mem>)

Decomposes a bit-pointer into its element address and bit index.

§Parameters
  • self
§Returns
  • .0: The memory address in which the referent bit is located.
  • .1: The index of the referent bit in *.0 according to the O type parameter.
pub fn to_const(self) -> BitPtr<Const, T, O>

Removes write permissions from a bit-pointer.

pub unsafe fn to_mut(self) -> BitPtr<Mut, T, O>

Adds write permissions to a bit-pointer.

§Safety

This pointer must have been derived from a *mut pointer.

impl<T, O> BitPtr<Const, T, O>
where T: BitStore, O: BitOrder,

pub fn from_ref(elem: &T) -> Self

Constructs a BitPtr to the zeroth bit in a single element.

pub fn from_slice(slice: &[T]) -> Self

Constructs a BitPtr to the zeroth bit in the zeroth element of a slice.

This method is distinct from Self::from_ref(&elem[0]), because it ensures that the returned bit-pointer has provenance over the entire slice. Indexing within a slice narrows the provenance range, and makes departure from the subslice, even within the original slice, illegal.

pub fn pointer(&self) -> *const T

Gets a raw pointer to the memory element containing the selected bit.

impl<T, O> BitPtr<Mut, T, O>
where T: BitStore, O: BitOrder,

pub fn from_mut(elem: &mut T) -> Self

Constructs a mutable BitPtr to the zeroth bit in a single element.

pub fn from_mut_slice(slice: &mut [T]) -> Self

Constructs a BitPtr to the zeroth bit in the zeroth element of a mutable slice.

This method is distinct from Self::from_mut(&mut elem[0]), because it ensures that the returned bit-pointer has provenance over the entire slice. Indexing within a slice narrows the provenance range, and makes departure from the subslice, even within the original slice, illegal.

pub fn from_slice_mut(slice: &mut [T]) -> Self

Constructs a mutable BitPtr to the zeroth bit in the zeroth element of a slice.

This method is distinct from Self::from_mut(&mut elem[0]), because it ensures that the returned bit-pointer has provenance over the entire slice. Indexing within a slice narrows the provenance range, and makes departure from the subslice, even within the original slice, illegal.

pub fn pointer(&self) -> *mut T

Gets a raw pointer to the memory location containing the selected bit.

impl<M, T, O> BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

Port of the *bool inherent API.

pub fn is_null(self) -> bool

Deprecated: BitPtr is never null

Tests if a bit-pointer is the null value.

This is always false, because BitPtr wraps a NonNull address internally. Use Option<BitPtr> to express the potential for a null pointer.

§Original

pointer::is_null

pub fn cast<U>(self) -> BitPtr<M, U, O>
where U: BitStore,

Casts to a BitPtr with a different storage parameter.

This is not free! In order to maintain value integrity, it first encodes itself into a BitSpan descriptor, casts that, then decodes into a BitPtr of the target type. If T and U have different ::Mem associated types, then this may change the selected bit in memory. This is an unavoidable cost of the addressing and encoding schemes.

§Original

pointer::cast

pub fn to_raw_parts(self) -> (Address<M, T>, BitIdx<T::Mem>)

Decomposes a bit-pointer into its address and head-index components.

§Original

pointer::to_raw_parts

§API Differences

The original method is unstable as of 1.54.0; however, because BitPtr already has a similar API, the name is optimistically stabilized here. Prefer .raw_parts() until the original inherent method stabilizes.

pub unsafe fn as_ref<'a>(self) -> Option<BitRef<'a, Const, T, O>>

Produces a proxy reference to the referent bit.

Because BitPtr guarantees that it is non-null and well-aligned, this never returns None. However, this is still unsafe to call on any bit-pointers created from conjured values rather than known references.

§Original

pointer::as_ref

§API Differences

This produces a proxy type rather than a true reference. The proxy implements Deref<Target = bool>, and can be converted to &bool with a reborrow &*.

§Safety

Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer in order to create the proxy. As such, you must ensure the following conditions are met:

  • the pointer must be dereferenceable as defined in the standard library documentation
  • the pointer must point to an initialized instance of T
  • you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
§Examples
use bitvec::prelude::*;

let data = 1u8;
let ptr = BitPtr::<_, _, Lsb0>::from_ref(&data);
let val = unsafe { ptr.as_ref() }.unwrap();
assert!(*val);

pub unsafe fn offset(self, count: isize) -> Self

Creates a new bit-pointer at a specified offset from the original.

count is in units of bits.

§Original

pointer::offset

§Safety

BitPtr is implemented with Rust raw pointers internally, and is subject to all of Rust’s rules about provenance and permission tracking. You must abide by the safety rules established in the original method, to which this internally delegates.

Additionally, bitvec imposes its own rules: while Rust cannot observe provenance beyond an element or byte level, bitvec demands that &mut BitSlice have exclusive view over all bits it observes. You must not produce a bit-pointer that departs a BitSlice region and intrudes on any &mut BitSlice’s handle, and you must not produce a write-capable bit-pointer that intrudes on a &BitSlice handle that expects its contents to be immutable.

Note that it is illegal to construct a bit-pointer that invalidates any of these rules. If you wish to defer safety-checking to the point of dereferencing, and allow the temporary construction but not dereference of illegal BitPtrs, use .wrapping_offset() instead.

§Examples
use bitvec::prelude::*;

let data = 5u8;
let ptr = BitPtr::<_, _, Lsb0>::from_ref(&data);
unsafe {
  assert!(ptr.read());
  assert!(!ptr.offset(1).read());
  assert!(ptr.offset(2).read());
}

pub fn wrapping_offset(self, count: isize) -> Self

Creates a new bit-pointer at a specified offset from the original.

count is in units of bits.

§Original

pointer::wrapping_offset

§API Differences

bitvec makes it explicitly illegal to wrap a pointer around the high end of the address space, because it is incapable of representing a null pointer.

However, <*T>::wrapping_offset has additional properties as a result of its tolerance for wrapping the address space: it tolerates departing a provenance region, and it is not unsafe to use it to create a bit-pointer outside the bounds of its original provenance.

§Safety

This function is safe to use because the bit-pointers it creates defer their provenance checks until the point of dereference. As such, you can safely use this to perform arbitrary pointer arithmetic that Rust considers illegal in ordinary arithmetic, as long as you do not dereference the bit-pointer until it has been brought in bounds of the originating provenance region.

This means that, to the Rust rule engine, let z = x.wrapping_add(y as usize).wrapping_sub(x as usize); is not equivalent to y, but z is safe to construct, and z.wrapping_add(x as usize).wrapping_sub(y as usize) produces a bit-pointer that is equivalent to x.

See the documentation of the original method for more details about provenance regions, and the distinctions that the optimizer makes about them.

§Examples
use bitvec::prelude::*;

let data = 0u32;
let mut ptr = BitPtr::<_, _, Lsb0>::from_ref(&data);
let end = ptr.wrapping_offset(32);
while ptr < end {
  println!("{}", unsafe { ptr.read() });
  ptr = ptr.wrapping_offset(3);
}

pub unsafe fn offset_from<U>(self, origin: BitPtr<M, U, O>) -> isize
where U: BitStore<Mem = T::Mem>,

Calculates the distance (in bits) between two bit-pointers.

This method is the inverse of .offset().

§Original

pointer::offset_from

§API Differences

The base pointer may have a different BitStore type parameter, as long as they share an underlying memory type. This is necessary in order to accommodate aliasing markers introduced between when an origin pointer was taken and when self is compared against it.

§Safety

Both self and origin must be drawn from the same provenance region. This means that they must be created from the same Rust allocation, whether with let or the allocator API, and must be in the (inclusive) range base ..= base + len. The first bit past the end of a region can be addressed, just not dereferenced.

See the original <*T>::offset_from for more details on region safety.

§Examples
use bitvec::prelude::*;

let data = 0u32;
let base = BitPtr::<_, _, Lsb0>::from_ref(&data);
let low = unsafe { base.add(10) };
let high = unsafe { low.add(15) };
unsafe {
  assert_eq!(high.offset_from(low), 15);
  assert_eq!(low.offset_from(high), -15);
  assert_eq!(low.offset(15), high);
  assert_eq!(high.offset(-15), low);
}

While .wrapping_offset() is safe to use for constructing bit-pointers that depart a provenance region, it remains illegal to dereference those pointers!

This usage is incorrect, and a program that contains it is not well-formed.

use bitvec::prelude::*;

let a = 0u8;
let b = !0u8;

let a_ptr = BitPtr::<_, _, Lsb0>::from_ref(&a);
let b_ptr = BitPtr::<_, _, Lsb0>::from_ref(&b);
let diff = (b_ptr.pointer() as isize)
  .wrapping_sub(a_ptr.pointer() as isize)
  // Remember: raw pointers are byte-stepped,
  // but bit-pointers are bit-stepped.
  .wrapping_mul(8);
// This pointer to `b` has `a`’s provenance:
let b_ptr_2 = a_ptr.wrapping_offset(diff);

// They are *arithmetically* equal:
assert_eq!(b_ptr, b_ptr_2);
// But it is still undefined behavior to cross provenances!
assert_eq!(0, unsafe { b_ptr_2.offset_from(b_ptr) });

pub unsafe fn add(self, count: usize) -> Self

Adjusts a bit-pointer upwards in memory. This is equivalent to .offset(count as isize).

count is in units of bits.

§Original

pointer::add

§Safety

See .offset().

pub unsafe fn sub(self, count: usize) -> Self

Adjusts a bit-pointer downwards in memory. This is equivalent to .offset((count as isize).wrapping_neg()).

count is in units of bits.

§Original

pointer::sub

§Safety

See .offset().

pub fn wrapping_add(self, count: usize) -> Self

Adjusts a bit-pointer upwards in memory, using wrapping semantics. This is equivalent to .wrapping_offset(count as isize).

count is in units of bits.

§Original

pointer::wrapping_add

§Safety

See .wrapping_offset().

pub fn wrapping_sub(self, count: usize) -> Self

Adjusts a bit-pointer downwards in memory, using wrapping semantics. This is equivalent to .wrapping_offset((count as isize).wrapping_neg()).

count is in units of bits.

§Original

pointer::wrapping_sub

§Safety

See .wrapping_offset().

pub unsafe fn read(self) -> bool

Reads the bit from *self.

§Original

pointer::read

§Safety

See ptr::read.

pub unsafe fn read_volatile(self) -> bool

Reads the bit from *self using a volatile load.

Prefer using a crate such as voladdress to manage volatile I/O and use bitvec only on the local objects it provides. Individual I/O operations for individual bits are likely not the behavior you want.

§Original

pointer::read_volatile

§Safety

See ptr::read_volatile.

pub unsafe fn read_unaligned(self) -> bool

Deprecated: BitPtr does not have unaligned addresses

Reads the bit from *self using an unaligned memory access.

BitPtr forbids unaligned addresses. If you have such an address, you must perform your memory accesses on the raw element, and only use bitvec on a well-aligned stack temporary. This method should never be necessary.

§Original

pointer::read_unaligned

§Safety

See ptr::read_unaligned

pub unsafe fn copy_to<T2, O2>(self, dest: BitPtr<Mut, T2, O2>, count: usize)
where T2: BitStore, O2: BitOrder,

Copies count bits from self to dest. The source and destination may overlap.

Note that overlap is only defined when O and O2 are the same type. If they differ, then bitvec does not define overlap, and assumes that they are wholly discrete in memory.

§Original

pointer::copy_to

§Safety

See ptr::copy.

pub unsafe fn copy_to_nonoverlapping<T2, O2>( self, dest: BitPtr<Mut, T2, O2>, count: usize, )
where T2: BitStore, O2: BitOrder,

Copies count bits from self to dest. The source and destination may not overlap.

§Original

pointer::copy_to_nonoverlapping

§Safety

See ptr::copy_nonoverlapping.

pub fn align_offset(self, align: usize) -> usize

Computes the offset (in bits) that needs to be applied to the bit-pointer in order to make it aligned to the given byte alignment.

“Alignment” here means that the bit-pointer selects the starting bit of a memory location whose address satisfies the requested alignment.

align is measured in bytes. If you wish to align your bit-pointer to a specific fraction (½, ¼, or ⅛ of one byte), please file an issue and I will work on adding this functionality.

§Original

pointer::align_offset

§Notes

If the base-element address of the bit-pointer is already aligned to align, then this will return the bit-offset required to select the first bit of the successor element.

If it is not possible to align the bit-pointer, then the implementation returns usize::MAX.

The return value is measured in bits, not T elements or bytes. The only thing you can do with it is pass it into .add() or .wrapping_add().

Note from the standard library: It is permissible for the implementation to always return usize::MAX. Only your algorithm’s performance can depend on getting a usable offset here; it must be correct independently of this function providing a useful value.

§Safety

There are no guarantees whatsoever that offsetting the bit-pointer will not overflow or go beyond the allocation that the bit-pointer selects. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.

§Panics

This method panics if align is not a power of two.

§Examples
use bitvec::prelude::*;

let data = [0u8; 3];
let ptr = BitPtr::<_, _, Lsb0>::from_slice(&data);
let ptr = unsafe { ptr.add(2) };
let count = ptr.align_offset(2);
assert!(count >= 6);

impl<T, O> BitPtr<Mut, T, O>
where T: BitStore, O: BitOrder,

Port of the *mut bool inherent API.

pub unsafe fn as_mut<'a>(self) -> Option<BitRef<'a, Mut, T, O>>

Produces a proxy reference to the referent bit.

Because BitPtr guarantees that it is non-null and well-aligned, this never returns None. However, this is still unsafe to call on any bit-pointers created from conjured values rather than known references.

§Original

pointer::as_mut

§API Differences

This produces a proxy type rather than a true reference. The proxy implements DerefMut<Target = bool>, and can be converted to &mut bool with a reborrow &mut *.

Writes to the proxy are not reflected in the proxied location until the proxy is destroyed, either through Drop or its .commit() method.

§Safety

Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer in order to create the proxy. As such, you must ensure the following conditions are met:

  • the pointer must be dereferenceable as defined in the standard library documentation
  • the pointer must point to an initialized instance of T
  • you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
  • you must ensure that no other bitvec handle targets the referent bit
§Examples
use bitvec::prelude::*;

let mut data = 0u8;
let ptr = BitPtr::<_, _, Lsb0>::from_mut(&mut data);
let mut val = unsafe { ptr.as_mut() }.unwrap();
assert!(!*val);
*val = true;
assert!(*val);

pub unsafe fn copy_from<T2, O2>(self, src: BitPtr<Const, T2, O2>, count: usize)
where T2: BitStore, O2: BitOrder,

Copies count bits from the region starting at src to the region starting at self.

The regions are free to overlap; the implementation will detect overlap and correctly avoid it.

Note: this has the opposite argument order from ptr::copy: self is the destination, not the source.

§Original

pointer::copy_from

§Safety

See ptr::copy.

pub unsafe fn copy_from_nonoverlapping<T2, O2>( self, src: BitPtr<Const, T2, O2>, count: usize, )
where T2: BitStore, O2: BitOrder,

Copies count bits from the region starting at src to the region starting at self.

Unlike .copy_from(), the two regions may not overlap; this method does not attempt to detect overlap and thus may have a slight performance boost over the overlap-handling .copy_from().

Note: this has the opposite argument order from ptr::copy_nonoverlapping: self is the destination, not the source.

§Original

pointer::copy_from_nonoverlapping

§Safety

See ptr::copy_nonoverlapping.

pub fn drop_in_place(self)

Deprecated: this has no effect, and should not be called

Runs the destructor of the referent value.

bool has no destructor; this function does nothing.

§Original

pointer::drop_in_place

§Safety

See ptr::drop_in_place.

pub unsafe fn write(self, value: bool)

Writes a new bit into the given location.

§Original

pointer::write

§Safety

See ptr::write.

pub unsafe fn write_volatile(self, value: bool)

Writes a new bit using volatile I/O operations.

Because processors do not generally have single-bit read or write instructions, this must perform a volatile read of the entire memory location, perform the write locally, then perform another volatile write to the entire location. These three steps are guaranteed to be sequential with respect to each other, but are not guaranteed to be atomic.

Volatile operations are intended to act on I/O memory, and are only guaranteed not to be elided or reordered by the compiler across other I/O operations.

You should not use bitvec to act on volatile memory. You should use a crate specialized for volatile I/O work, such as voladdress, and use it to explicitly manage the I/O, asking it to perform bitvec work only on a local snapshot of the volatile location.

§Original

pointer::write_volatile

§Safety

See ptr::write_volatile.

pub unsafe fn write_unaligned(self, value: bool)

Deprecated: BitPtr does not have unaligned addresses

Writes a bit into memory, tolerating unaligned addresses.

BitPtr forbids unaligned addresses. The underlying implementation could operate on misaligned addresses, but bitvec elects to disallow them in keeping with the rest of its requirements.

§Original

pointer::write_unaligned

§Safety

See ptr::write_unaligned.

pub unsafe fn replace(self, value: bool) -> bool

Replaces the bit at *self with a new value, returning the previous value.

§Original

pointer::replace

§Safety

See ptr::replace.

pub unsafe fn swap<T2, O2>(self, with: BitPtr<Mut, T2, O2>)
where T2: BitStore, O2: BitOrder,

Swaps the bits at two mutable locations.

§Original

pointer::swap

§Safety

See ptr::swap.

Trait Implementations§

impl<M, T, O> Clone for BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

impl<M, T, O> Debug for BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

impl<T, O> From<&T> for BitPtr<Const, T, O>
where T: BitStore, O: BitOrder,

impl<T, O> From<&mut T> for BitPtr<Mut, T, O>
where T: BitStore, O: BitOrder,

impl<M, T, O> Hash for BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

impl<M, T, O> Ord for BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

impl<M1, M2, T1, T2, O> PartialEq<BitPtr<M2, T2, O>> for BitPtr<M1, T1, O>
where M1: Mutability, M2: Mutability, T1: BitStore, T2: BitStore, O: BitOrder,

impl<M1, M2, T1, T2, O> PartialOrd<BitPtr<M2, T2, O>> for BitPtr<M1, T1, O>
where M1: Mutability, M2: Mutability, T1: BitStore, T2: BitStore, O: BitOrder,

impl<M, T, O> Pointer for BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

impl<M, T, O> RangeBounds<BitPtr<M, T, O>> for BitPtrRange<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

impl<T, O> TryFrom<*const T> for BitPtr<Const, T, O>
where T: BitStore, O: BitOrder,
type Error = BitPtrError<T>

impl<T, O> TryFrom<*mut T> for BitPtr<Mut, T, O>
where T: BitStore, O: BitOrder,
type Error = BitPtrError<T>

impl<M, T, O> Copy for BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,

impl<M, T, O> Eq for BitPtr<M, T, O>
where M: Mutability, T: BitStore, O: BitOrder,
Auto Trait Implementations§

impl<M, T, O> Freeze for BitPtr<M, T, O>
where M: Freeze,

impl<M, T, O> RefUnwindSafe for BitPtr<M, T, O>

impl<M, T, O> !Send for BitPtr<M, T, O>

impl<M, T, O> !Sync for BitPtr<M, T, O>

impl<M, T, O> Unpin for BitPtr<M, T, O>
where M: Unpin, O: Unpin,

impl<M, T, O> UnwindSafe for BitPtr<M, T, O>
Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dst: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dst. Read more
Source§

impl<T> Conv for T

Source§

fn conv<T>(self) -> T
where Self: Into<T>,

Converts self into T using Into<T>. Read more
Source§

impl<T> FmtForward for T

Source§

fn fmt_binary(self) -> FmtBinary<Self>
where Self: Binary,

Causes self to use its Binary implementation when Debug-formatted.
Source§

fn fmt_display(self) -> FmtDisplay<Self>
where Self: Display,

Causes self to use its Display implementation when Debug-formatted.
Source§

fn fmt_lower_exp(self) -> FmtLowerExp<Self>
where Self: LowerExp,

Causes self to use its LowerExp implementation when Debug-formatted.
Source§

fn fmt_lower_hex(self) -> FmtLowerHex<Self>
where Self: LowerHex,

Causes self to use its LowerHex implementation when Debug-formatted.
Source§

fn fmt_octal(self) -> FmtOctal<Self>
where Self: Octal,

Causes self to use its Octal implementation when Debug-formatted.
Source§

fn fmt_pointer(self) -> FmtPointer<Self>
where Self: Pointer,

Causes self to use its Pointer implementation when Debug-formatted.
Source§

fn fmt_upper_exp(self) -> FmtUpperExp<Self>
where Self: UpperExp,

Causes self to use its UpperExp implementation when Debug-formatted.
Source§

fn fmt_upper_hex(self) -> FmtUpperHex<Self>
where Self: UpperHex,

Causes self to use its UpperHex implementation when Debug-formatted.
Source§

fn fmt_list(self) -> FmtList<Self>
where &'a Self: for<'a> IntoIterator,

Formats each item in a sequence. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Pipe for T
where T: ?Sized,

fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
where Self: Sized,

Pipes by value. This is generally the method you want to use.

fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R
where R: 'a,

Borrows self and passes that borrow into the pipe function.

fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R
where R: 'a,

Mutably borrows self and passes that borrow into the pipe function.

fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R
where Self: Borrow<B>, B: 'a + ?Sized, R: 'a,

Borrows self, then passes self.borrow() into the pipe function.

fn pipe_borrow_mut<'a, B, R>(&'a mut self, func: impl FnOnce(&'a mut B) -> R) -> R
where Self: BorrowMut<B>, B: 'a + ?Sized, R: 'a,

Mutably borrows self, then passes self.borrow_mut() into the pipe function.

fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R
where Self: AsRef<U>, U: 'a + ?Sized, R: 'a,

Borrows self, then passes self.as_ref() into the pipe function.

fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R
where Self: AsMut<U>, U: 'a + ?Sized, R: 'a,

Mutably borrows self, then passes self.as_mut() into the pipe function.

fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
where Self: Deref<Target = T>, T: 'a + ?Sized, R: 'a,

Borrows self, then passes self.deref() into the pipe function.

fn pipe_deref_mut<'a, T, R>(&'a mut self, func: impl FnOnce(&'a mut T) -> R) -> R
where Self: DerefMut<Target = T> + Deref, T: 'a + ?Sized, R: 'a,

Mutably borrows self, then passes self.deref_mut() into the pipe function.
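All of the pipe_* methods share one core idea: pass the value (or some view of it) into a closure and return the closure's result, so call chains read left to right. A minimal sketch of the by-value case (the real Pipe trait, from the tap crate, also provides the borrowing variants listed above) looks like this:

```rust
// Minimal sketch of the pipe pattern: `pipe` hands the value to a closure
// and returns whatever the closure produces, enabling left-to-right chains.
trait Pipe: Sized {
    fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R {
        func(self)
    }
}

// Blanket impl: every sized type gains `.pipe(...)`.
impl<T> Pipe for T {}

fn main() {
    // Reads left to right instead of nesting calls inside out.
    let n = "42".pipe(|s| s.parse::<i32>().unwrap()).pipe(|x| x + 1);
    assert_eq!(n, 43);
}
```

The borrowing variants (pipe_ref, pipe_borrow, pipe_deref, and so on) differ only in which kind of reference they hand to the closure, mirroring the Borrow, AsRef, and Deref trait families.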
impl<T> Tap for T

fn tap(self, func: impl FnOnce(&Self)) -> Self

Immutable access to a value.

fn tap_mut(self, func: impl FnOnce(&mut Self)) -> Self

Mutable access to a value.

fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self
where Self: Borrow<B>, B: ?Sized,

Immutable access to the Borrow<B> of a value.

fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self
where Self: BorrowMut<B>, B: ?Sized,

Mutable access to the BorrowMut<B> of a value.

fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self
where Self: AsRef<R>, R: ?Sized,

Immutable access to the AsRef<R> view of a value.

fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self
where Self: AsMut<R>, R: ?Sized,

Mutable access to the AsMut<R> view of a value.

fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self
where Self: Deref<Target = T>, T: ?Sized,

Immutable access to the Deref::Target of a value.

fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self
where Self: DerefMut<Target = T> + Deref, T: ?Sized,

Mutable access to the Deref::Target of a value.

fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self

Calls .tap() only in debug builds, and is erased in release builds.

fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self

Calls .tap_mut() only in debug builds, and is erased in release builds.

fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self
where Self: Borrow<B>, B: ?Sized,

Calls .tap_borrow() only in debug builds, and is erased in release builds.

fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self
where Self: BorrowMut<B>, B: ?Sized,

Calls .tap_borrow_mut() only in debug builds, and is erased in release builds.

fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self
where Self: AsRef<R>, R: ?Sized,

Calls .tap_ref() only in debug builds, and is erased in release builds.

fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self
where Self: AsMut<R>, R: ?Sized,

Calls .tap_ref_mut() only in debug builds, and is erased in release builds.

fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self
where Self: Deref<Target = T>, T: ?Sized,

Calls .tap_deref() only in debug builds, and is erased in release builds.

fn tap_deref_mut_dbg<T>(self, func: impl FnOnce(&mut T)) -> Self
where Self: DerefMut<Target = T> + Deref, T: ?Sized,

Calls .tap_deref_mut() only in debug builds, and is erased in release builds.
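Where pipe transforms a value into something else, tap runs a closure on the value for its side effects and then returns the value unchanged, so inspection or mutation can happen mid-chain. A minimal sketch of the two core methods (the real Tap trait, from the tap crate, also provides the borrow, ref, deref, and _dbg variants listed above):

```rust
// Minimal sketch of the tap pattern: run a closure on (a borrow of) the
// value for side effects, then hand the value back unchanged.
trait Tap: Sized {
    fn tap(self, func: impl FnOnce(&Self)) -> Self {
        func(&self);
        self
    }
    fn tap_mut(mut self, func: impl FnOnce(&mut Self)) -> Self {
        func(&mut self);
        self
    }
}

// Blanket impl: every sized type gains `.tap(...)` and `.tap_mut(...)`.
impl<T> Tap for T {}

fn main() {
    // Inspect and mutate in the middle of an expression chain.
    let v = vec![3, 1, 2]
        .tap(|v| assert_eq!(v.len(), 3))
        .tap_mut(|v| v.sort());
    assert_eq!(v, vec![1, 2, 3]);
}
```

The _dbg variants wrap the closure call in a debug-assertions check, so diagnostic taps can stay in the source without costing anything in release builds.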
impl<T> TryConv for T

fn try_conv<T>(self) -> Result<T, Self::Error>
where Self: TryInto<T>,

Attempts to convert self into T using TryInto<T>.
impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
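These two blanket implementations mean that every infallible conversion is also usable through the fallible API, with Infallible as its error type, while genuinely fallible conversions report a real error. Plain standard-library integer conversions show both cases:

```rust
use core::convert::TryFrom;

fn main() {
    // `u64: From<u32>` exists, so the blanket impl supplies
    // `TryFrom<u32> for u64` with `Error = Infallible`: it can never fail.
    let wide = u64::try_from(7u32).expect("widening is infallible");
    assert_eq!(wide, 7u64);

    // A narrowing conversion is genuinely fallible and returns Err
    // when the value does not fit in the target type.
    assert!(u8::try_from(300u16).is_err());
    assert_eq!(u8::try_from(200u16), Ok(200u8));
}
```

This is why code generic over TryInto can accept both fallible and infallible sources without special-casing either.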

Layout§

Note: Unable to compute type layout, possibly due to this type having generic parameters. Layout can only be computed for concrete, fully-instantiated types.