DD: Double-Double Extended Precision
IEEE double provides 53 bits of significand (~15.9 decimal digits). For many scientific computations — ill-conditioned linear systems, long-running simulations, Newton’s method for high-degree polynomials — this is not enough. But going to quad precision (128-bit) requires either rare hardware support or slow arbitrary-precision libraries.
Double-double (DD) achieves ~106 bits of significand (~31.4 decimal digits) using two ordinary doubles. The trick: the value is stored as an unevaluated sum hi + lo, where hi carries the high-order bits and lo carries the residual. Arithmetic uses error-free transformations (Dekker’s algorithm, Knuth’s two-sum) to maintain the invariant that lo captures exactly the rounding error of hi. The result is double the precision of double, using only standard double-precision hardware — no special instructions, no library dependencies.
dd is a fixed-format extended-precision type:
| Property | Value |
|---|---|
| Total bits | 128 (2 × 64-bit doubles) |
| Sign | 1 bit |
| Exponent | 11 bits (same as double) |
| Significand | ~106 bits (~31.4 decimal digits) |
| Dynamic range | Same as double (~10^-308 to ~10^308) |
| Epsilon | ~4.93 × 10^-32 |
Key Properties
- Not a template: fixed format using two `double` values
- ~106 bits of precision: double the precision of `double`
- Same dynamic range as `double`: 11-bit exponent, ~10^308
- Software-only: runs on any hardware with IEEE double
- Error-free transformations: no precision loss in intermediate computations
- Supports infinity, NaN, subnormals: full IEEE-754 special value semantics
Internal Representation
```cpp
class dd {
    double hi;  // High-order component (carries the most significant bits)
    double lo;  // Low-order component (captures the rounding error of hi)
    // Invariant: |lo| <= 0.5 * ulp(hi)
};
```

The mathematical value is hi + lo, but the two components are never actually added together (that would lose precision); they are maintained as an unevaluated sum.
How It Works
The core algorithms are error-free transformations.

Two-Sum (Knuth) computes a + b = s + e, where s is the floating-point sum and e is the exact rounding error:

```cpp
s = a + b;
v = s - a;
e = (a - (s - v)) + (b - v);
```

Two-Product (Dekker) computes a * b = p + e, where p is the floating-point product and e is the exact rounding error (it requires an FMA instruction or Dekker splitting).
DD arithmetic chains these transformations:
- Compute `hi` of the result using standard double arithmetic
- Compute the exact rounding error of that operation
- Add the rounding error to `lo` (along with the `lo` components of the operands)
- Renormalize the `(hi, lo)` pair
How to Use It
Include

```cpp
#include <universal/number/dd/dd.hpp>
using namespace sw::universal;
```

Basic Usage

```cpp
dd a("3.14159265358979323846264338327950"); // 32 digits of pi
dd b("2.71828182845904523536028747135266"); // 32 digits of e

dd product = a * b;
std::cout << std::setprecision(32) << product << std::endl;
// Accurate to ~31 decimal digits
```

Solving Ill-Conditioned Systems
```cpp
template<typename Real>
Real hilbert_condition(int n) {
    // Hilbert matrix has condition number ~10^(3.5n)
    // For n=5, condition number ~10^17, exceeding double precision
    Real sum(0);
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= n; ++j) {
            sum += Real(1) / Real(i + j - 1);
        }
    }
    return sum;
}

// double loses all precision for large Hilbert matrices
double d_result = hilbert_condition<double>(10);

// dd maintains ~31 digits, enough for moderately ill-conditioned problems
dd dd_result = hilbert_condition<dd>(10);
```

Newton’s Method with Extended Precision
```cpp
// Finding roots of high-degree polynomials
dd x(1.5); // Initial guess
for (int i = 0; i < 50; ++i) {
    dd fx  = x * x * x - dd(2);  // f(x) = x^3 - 2
    dd fpx = dd(3) * x * x;      // f'(x) = 3x^2
    x = x - fx / fpx;
}
// x converges to cube_root(2) with ~31 digits of accuracy
```

Precision Progression
```cpp
// Same computation at three precision levels
float  f_sum = 0.0f;
double d_sum = 0.0;
dd     dd_sum(0.0);

for (int i = 1; i <= 1000000; ++i) {
    float  term_f  = 1.0f / (float)i;
    double term_d  = 1.0 / (double)i;
    dd     term_dd = dd(1) / dd(i);

    f_sum  += term_f;   // ~7 digits
    d_sum  += term_d;   // ~15 digits
    dd_sum += term_dd;  // ~31 digits
}
```

Problems It Solves
| Problem | How DD Solves It |
|---|---|
| Double precision (15 digits) is insufficient | DD provides ~31 decimal digits |
| Quad precision requires special hardware or slow libraries | DD uses standard double hardware, no dependencies |
| Ill-conditioned systems lose all significant digits | Extra precision preserves meaningful digits |
| Newton’s method stalls at double-precision accuracy | DD allows convergence to 31-digit accuracy |
| Error-free residual computation for iterative refinement | Error-free transformations capture exact rounding errors |
| Need extended precision but can’t afford arbitrary-precision | DD is ~10-20x cost of double, not 100-1000x like MPFR |
| Cross-platform reproducibility at extended precision | Pure software, same result on every IEEE-754 platform |