Standard casts from BigInteger to fixed-size integer all throw OverflowException if the number is too large (and unchecked doesn't work).
How can I get unchecked-like conversion behavior?
You can truncate a BigInteger by masking off the higher bits. For example, given a BigInteger big:
- uint with `(uint)(big & uint.MaxValue)`
- ulong with `(ulong)(big & ulong.MaxValue)`
- int with `unchecked((int)(uint)(big & uint.MaxValue))`
- long with `unchecked((long)(ulong)(big & ulong.MaxValue))`

This technique is not efficient; it requires two memory allocations (one to convert e.g. `uint.MaxValue` to a BigInteger, and another to hold the result of the `&` operation). Allocations are avoided, however, when the mask is smaller than 32 bits (e.g. converting to ushort with `(ushort)(big & ushort.MaxValue)` should not allocate), because BigInteger does not allocate memory for numbers that fit in 31 bits or fewer.
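Wrapped up as extension methods, the same masking expressions look like this (a minimal sketch; the class and method names are just for illustration):

```csharp
using System.Numerics;

public static class BigIntegerMaskExtensions
{
    // Truncate to the low 32 bits, then reinterpret the bit pattern as a signed int.
    public static int ToInt32Masked(this BigInteger big)
        => unchecked((int)(uint)(big & uint.MaxValue));

    // Same idea for the low 64 bits.
    public static long ToInt64Masked(this BigInteger big)
        => unchecked((long)(ulong)(big & ulong.MaxValue));
}
```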
Since the introduction of Generic Math in .NET 7, you can simply use int.CreateTruncating:
var truncating = int.CreateTruncating(BigInteger.One);
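For example (a quick illustration of the truncating behavior, with an arbitrarily chosen value):

```csharp
using System.Numerics;

BigInteger big = (BigInteger)uint.MaxValue + 2;   // 4,294,967,297 - does not fit in 32 bits

// int viaCast = (int)big;                        // would throw OverflowException
int viaTruncating = int.CreateTruncating(big);    // 1 - only the low 32 bits are kept
uint viaUnsigned = uint.CreateTruncating(big);    // 1
```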
It is also the fastest option, assuming I've modified the benchmark created by @SunsetQuest correctly - https://dotnetfiddle.net/nPCL3P :
| Bits | Mask & Cast | ToByteArray->BitConv | GenericMath | Compile |
|---|---|---|---|---|
| 32 | 434 | 174 | 38 | 96 |
| 64 | 317 | 202 | 43 | 119 |
| 128 | 346 | 199 | 37 | 98 |
| 256 | 443 | 276 | 37 | 95 |
| 512 | 510 | 431 | 37 | 98 |
| 1024 | 821 | 766 | 46 | 96 |
| 2048 | 1,738 | 1,368 | 37 | 96 |
I will try migrating all options to BenchmarkDotNet in the next few days to validate.
If you are interested in how it works internally, you can check out the source code, which as far as I can see does basically the same thing as SunsetQuest's approach but leverages direct access to the internal bits array without extra steps.
UPD
Running BenchmarkDotNet with one additional option - using UnsafeAccessorAttribute - produced the following results for me:
BenchmarkDotNet v0.15.0, Ubuntu Linux, Intel Core i7-9750H CPU 2.60GHz
| Method | Mean | Error | StdDev | Gen0 | Allocated |
|---|---|---|---|---|---|
| MaskToInt | 24,974.22 ns | 1,034.631 ns | 2,968.552 ns | 0.1221 | 832 B |
| ToInt | 45,567.48 ns | 1,279.287 ns | 3,711.443 ns | 5.3101 | 33432 B |
| ToInt32Truncate | 6,839.48 ns | 136.880 ns | 348.402 ns | 0.2823 | 1776 B |
| ToInt32TruncateCompiled | 139.22 ns | 4.093 ns | 11.808 ns | - | - |
| ToInt32TruncateUnsafeAccessor | 21.79 ns | 0.772 ns | 2.251 ns | - | - |
| GenericMath | 86.59 ns | 1.921 ns | 5.663 ns | - | - |
The full benchmark code is available @github.
The results are a bit interesting - using unsafe accessors is for some reason the fastest option (I tried using unsafe accessors instead of compiled lambdas in BigIntegerCompiledAccess by SunsetQuest). I will try to validate this later:
using System.Numerics;
using System.Runtime.CompilerServices;

public static class BigIntegerExtensionsUnsafeAccessor
{
    // internal readonly int _sign; // Do not rename (binary serialization)
    [UnsafeAccessor(UnsafeAccessorKind.Field, Name = "_sign")]
    extern static ref int GetSetSign(ref BigInteger c);

    // internal readonly uint[]? _bits; // Do not rename (binary serialization)
    [UnsafeAccessor(UnsafeAccessorKind.Field, Name = "_bits")]
    extern static ref uint[]? GetSetBits(ref BigInteger c);

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static uint ToUInt32TruncateUnsafeAccessor(this BigInteger value)
    {
        if (value.IsZero) return 0u;

        int sign = GetSetSign(ref value);
        uint[]? magnitude = GetSetBits(ref value);

        if (magnitude is null)
        {
            // Small values are stored directly in _sign; for negatives, return the two's-complement bit pattern.
            if (sign >= 0) return (uint)sign;
            unchecked
            {
                uint absSign = (uint)(-sign);
                return (uint)(~absSign + 1);
            }
        }

        // Larger values keep their magnitude in _bits; take the low word and apply the sign.
        uint lowWord = magnitude.Length > 0 ? magnitude[0] : 0u;
        return sign > 0 ? lowWord : unchecked((uint)(~lowWord + 1));
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int ToInt32TruncateUnsafeAccessor(this BigInteger value)
    {
        return unchecked((int)value.ToUInt32TruncateUnsafeAccessor());
    }
}
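A quick sanity check (illustrative only) that the accessor-based extension agrees with the built-in truncating conversion:

```csharp
using System;
using System.Numerics;

var big = BigInteger.Parse("-123456789012345678901234567890");

int viaAccessor = big.ToInt32TruncateUnsafeAccessor();
int viaGenericMath = int.CreateTruncating(big);

Console.WriteLine(viaAccessor == viaGenericMath); // expected: True
```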
"CreateTruncating is clearly the best" - totally agree. Though this option is only available since .NET 7, so there are cases where it can't be used, as far as I can see. As for performance, I think it should be the best too, but I need a bit more time to check why the unsafe accessor is faster in the benchmark.

Update 6/14/2025: Use uint.CreateTruncating() from @Guru Stron's answer. Better performance than the code below, and a simple built-in function.
If you are OK with reflection, here is another choice, with a massive 1000x speed-up on very large numbers. I was also surprised to see that InBetween's BitConverter.ToInt32 was much faster than the mask-and-cast approach.
Benchmarks: Release mode, no debugger attached, C#9, Ryzen 7950x3d, 5M iterations measured in ms.
| Bits | Mask & Cast | ToByteArray->BitConv | Reflection | Compile |
|---|---|---|---|---|
| 32 | 363 | 46 | 48 | 18 |
| 64 | 134 | 54 | 55 | 18 |
| 128 | 141 | 64 | 66 | 18 |
| 256 | 212 | 86 | 155 | 18 |
| 512 | 243 | 134 | 204 | 19 |
| 1024 | 250 | 256 | 217 | 18 |
| 2048 | 676 | 473 | 226 | 18 |
| 4096 | 912 | 920 | 255 | 18 |
| 8192 | 1453 | 1873 | 320 | 18 |
| 16384 | 2510 | 3735 | 446 | 18 |
| 32768 | 4672 | 7546 | 715 | 19 |
| 65536 | 8952 | 14271 | 1172 | 18 |
full benchmark code: https://dotnetfiddle.net/mFAQqn
The Extension:
using System;
using System.Linq.Expressions;
using System.Numerics;
using System.Reflection;
using System.Runtime.CompilerServices;

public static class BigIntegerCompiledAccess
{
    public static readonly Func<BigInteger, int> GetSign;
    public static readonly Func<BigInteger, uint[]?> GetBits;

    static BigIntegerCompiledAccess()
    {
        // Expression: (BigInteger x) => x._sign
        var param = Expression.Parameter(typeof(BigInteger), "x");

        var signField = typeof(BigInteger)
            .GetField("_sign", BindingFlags.NonPublic | BindingFlags.Instance)
            ?? throw new InvalidOperationException("Field '_sign' not found");

        var bitsField = typeof(BigInteger)
            .GetField("_bits", BindingFlags.NonPublic | BindingFlags.Instance)
            ?? throw new InvalidOperationException("Field '_bits' not found");

        GetSign = Expression.Lambda<Func<BigInteger, int>>(
            Expression.Field(param, signField), param).Compile();
        GetBits = Expression.Lambda<Func<BigInteger, uint[]?>>(
            Expression.Field(param, bitsField), param).Compile();
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static uint ToUInt32TruncateCompiled(this BigInteger value)
    {
        if (value.IsZero) return 0u;

        int sign = BigIntegerCompiledAccess.GetSign(value);
        uint[]? magnitude = BigIntegerCompiledAccess.GetBits(value);

        if (magnitude is null)
        {
            // Small values are stored directly in _sign; for negatives, return the two's-complement bit pattern.
            if (sign >= 0) return (uint)sign;
            unchecked
            {
                uint absSign = (uint)(-sign);
                return (uint)(~absSign + 1);
            }
        }

        // Larger values keep their magnitude in _bits; take the low word and apply the sign.
        uint lowWord = magnitude.Length > 0 ? magnitude[0] : 0u;
        return sign > 0 ? lowWord : unchecked((uint)(~lowWord + 1));
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int ToInt32TruncateCompiled(this BigInteger value)
    {
        return unchecked((int)value.ToUInt32TruncateCompiled());
    }
}
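Usage is the same as with the other extensions; for instance (an illustrative check against the plain masking approach):

```csharp
using System;
using System.Numerics;

var big = BigInteger.Pow(2, 100) + 42;  // arbitrary large value; its low 32 bits are 42

int viaCompiled = big.ToInt32TruncateCompiled();
int viaMask = unchecked((int)(uint)(big & uint.MaxValue));

Console.WriteLine(viaCompiled);            // 42
Console.WriteLine(viaCompiled == viaMask); // expected: True
```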
int.CreateTruncating seems to be the fastest option - please check out my answer - stackoverflow.com/a/79656564/2501279

1) ToUInt32Truncate from the fiddle seems to have a bug in it for small BigInteger values; 2) using UnsafeAccessorAttribute to get the reflection data resulted in the best performance for me (but was not reproduced by your benchmark), will try to validate later.

A faster approach:
public static int ToInt(this BigInteger big)
{
    return BitConverter.ToInt32(big.ToByteArray().AsSpan(0, 4));
}
Solutions for long, uint, etc. are straightforward.
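Note that ToByteArray() returns the minimal two's-complement representation, so AsSpan(0, 4) throws for values that fit in fewer than 4 bytes. A padded variant (a sketch only; the class and method names are illustrative, and it assumes a little-endian host just like the snippet above) avoids that:

```csharp
using System;
using System.Numerics;

public static class BigIntegerByteExtensions
{
    public static int ToInt32Padded(this BigInteger big)
    {
        Span<byte> buffer = stackalloc byte[4];
        if (big.Sign < 0) buffer.Fill(0xFF);  // sign-extend negative values

        // Minimal little-endian two's-complement representation (may be shorter than 4 bytes).
        byte[] bytes = big.ToByteArray();
        bytes.AsSpan(0, Math.Min(bytes.Length, 4)).CopyTo(buffer);

        return BitConverter.ToInt32(buffer);
    }
}
```

It still allocates the full byte array for the value, though, which is the cost the comment below points out.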
Note that ToByteArray() allocates and fills a buffer for the entire value: if the BigInteger is 100 KB in size, for example, 100 KB are allocated and filled just to extract the 4 bytes needed for the int.