Books

Modern Computational Finance

Modern Computational Finance: AAD and Parallel Simulations, published by Wiley, is out November 13, 2018 (ebook) and November 20, 2018 (hardcover) on Amazon, Apple Books and many other places, including your favorite book store.

It covers the principles, professional implementation and interaction of three of the key technologies powering modern derivatives systems: parallel computing, Monte-Carlo simulations and automatic adjoint differentiation (AAD), a powerful algorithm that computes thousands of differentials with outstanding speed and accuracy. It is the combination of these technologies, among others, that earned Danske Bank the In-House System of the Year 2015 Risk award.
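
The sketch below is not the book's code, which is far more general and heavily optimized; it is only a minimal, self-contained illustration of the idea behind AAD: every arithmetic operation is recorded on a tape together with its local derivatives, and a single backward sweep over the tape propagates adjoints from the result back to every input, so all differentials are obtained in one pass at a cost independent of the number of inputs. The Record, Number and adjoints names are made up for the illustration.

    #include <cmath>
    #include <iostream>
    #include <vector>

    // One record per elementary operation: up to two arguments,
    // each with its index on the tape and the local derivative.
    struct Record
    {
        int    arg[2];
        double der[2];
        int    numArgs;
    };

    std::vector<Record> tape;

    // A "Number" that records every operation it participates in.
    struct Number
    {
        double value;
        int    idx;   // position of this Number's record on the tape

        Number(double v = 0.0) : value(v)
        {
            idx = (int)tape.size();
            tape.push_back({ {0, 0}, {0.0, 0.0}, 0 });   // leaf: no arguments
        }
    };

    Number operator*(const Number& a, const Number& b)
    {
        Number r(a.value * b.value);
        Record& rec = tape[r.idx];
        rec.numArgs = 2;
        rec.arg[0] = a.idx; rec.der[0] = b.value;   // d(ab)/da = b
        rec.arg[1] = b.idx; rec.der[1] = a.value;   // d(ab)/db = a
        return r;
    }

    Number operator+(const Number& a, const Number& b)
    {
        Number r(a.value + b.value);
        Record& rec = tape[r.idx];
        rec.numArgs = 2;
        rec.arg[0] = a.idx; rec.der[0] = 1.0;
        rec.arg[1] = b.idx; rec.der[1] = 1.0;
        return r;
    }

    Number log(const Number& a)
    {
        Number r(std::log(a.value));
        Record& rec = tape[r.idx];
        rec.numArgs = 1;
        rec.arg[0] = a.idx; rec.der[0] = 1.0 / a.value;
        return r;
    }

    // Backward sweep: propagate adjoints from the result down to the leaves
    // in one traversal of the tape, whatever the number of inputs.
    std::vector<double> adjoints(const Number& result)
    {
        std::vector<double> adj(tape.size(), 0.0);
        adj[result.idx] = 1.0;
        for (int i = (int)tape.size() - 1; i >= 0; --i)
            for (int j = 0; j < tape[i].numArgs; ++j)
                adj[tape[i].arg[j]] += adj[i] * tape[i].der[j];
        return adj;
    }

    int main()
    {
        Number x(2.0), y(3.0);
        Number f = x * y + log(x);          // f = xy + log(x)

        auto adj = adjoints(f);
        std::cout << "df/dx = " << adj[x.idx]     // y + 1/x = 3.5
                  << ", df/dy = " << adj[y.idx]   // x       = 2
                  << "\n";
    }

The book develops this idea into an industrial-strength framework and combines it with parallel Monte-Carlo simulations.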

Reviews

“It would not be much of an exaggeration to say that Antoine Savine’s book ranks as the 21st century peer to Merton’s ‘Continuous-Time Finance’:

It makes modern computational techniques such as multi-threaded parallel AAD as accessible to finance professionals as Merton’s introduction of stochastic calculus into finance. A first in a three-book series authored by Danske Bank’s powerhouse quant team makes intricate concepts inherent to production-quality implementation of AAD easy to understand and follow through.

No other quant finance focused book has gone so deeply into parallel C++ and AAD with such clarity, level of detail and thoroughness. I can hardly wait for the remaining two volumes to see what else the wizards of AAD have up their sleeves.”

Vladimir V. Piterbarg,
Head of Quantitative Analytics and Development at NatWest Markets,
Co-author of the three-volume set “Interest Rate Modeling”

“A passion to instruct
A knack for clarity
An obsession with detail
A luminous writer
An instant classic.”

Bruno Dupire,
Head of Quantitative Research, Bloomberg L.P.

Preview

TOC and Leif Andersen’s preface


Sobol sequences explained

Section 5.4 provides a self-contained introduction to Sobol sampling, a sharp improvement over random sampling in the context of pricing by Monte-Carlo. Sobol sampling has been considered best practice in finance since the pioneering work of Jaeckel and of Joe and Kuo in the early 2000s, yet it remains widely misunderstood.
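
As a small taste of the topic, here is a minimal sketch of the first Sobol dimension only; it is not the book's generator, which follows Jaeckel and Joe and Kuo for the full multi-dimensional construction. In dimension one the direction numbers are simply 1/2, 1/4, 1/8, ... in fixed point, and with the Gray-code ordering each draw is obtained from the previous one with a single XOR. The class name Sobol1D is made up for the illustration.

    #include <cstdint>
    #include <iostream>

    // First (trivial) Sobol dimension: direction numbers are 1/2, 1/4, 1/8, ...
    // in 32-bit fixed point, and each draw is obtained from the previous one
    // with a single XOR (Gray-code construction), so generation is extremely fast.
    class Sobol1D
    {
        uint32_t direction_[32];
        uint32_t state_ = 0;
        uint32_t index_ = 0;   // number of points generated so far

    public:
        Sobol1D()
        {
            for (int j = 0; j < 32; ++j) direction_[j] = 1u << (31 - j);
        }

        double next()
        {
            // position of the rightmost zero bit of the counter
            uint32_t n = index_++;
            int j = 0;
            while (n & 1) { n >>= 1; ++j; }

            state_ ^= direction_[j];
            return state_ / 4294967296.0;   // scale to (0,1)
        }
    };

    int main()
    {
        // The first Sobol points fill the unit interval evenly:
        // 0.5, 0.75, 0.25, 0.375, 0.875, 0.625, 0.125, ...
        Sobol1D sobol;
        for (int i = 0; i < 8; ++i) std::cout << sobol.next() << "\n";
    }

The resulting points cover the unit interval far more evenly than pseudo-random draws, which is the source of the improved Monte-Carlo convergence.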


Check-pointing differentials

Chapter 13 discusses the check-pointing technique, critical for the practical implementation of AAD. This chapter is not self-contained and is probably hard to read in isolation from the rest of Part III; it is shown here only to give a taste of the general style of the book.
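
The pattern at the heart of check-pointing can nonetheless be sketched in a few lines: run the forward pass with differentiation off, storing only the state at the entry of each stage (the checkpoints); then, during the backward sweep, re-run each stage from its checkpoint with differentiation on and chain the resulting adjoints, trading memory for a bounded amount of recomputation. In the book the differentiated re-runs go through the AAD tape; in the self-contained toy below the state is a single scalar, so a forward-mode dual number stands in for the tape, and the Dual type and stage functions are made up for the illustration.

    #include <cmath>
    #include <functional>
    #include <iostream>
    #include <vector>

    using std::exp; using std::log;   // so the stage templates also compile on plain doubles

    // Toy dual number: value + derivative, standing in for a taped number
    // when a single stage is re-run with differentiation on.
    struct Dual
    {
        double val, der;
    };
    Dual operator*(Dual a, Dual b) { return { a.val * b.val, a.der * b.val + a.val * b.der }; }
    Dual operator+(Dual a, double b) { return { a.val + b, a.der }; }
    Dual exp(Dual a) { const double e = std::exp(a.val); return { e, a.der * e }; }
    Dual log(Dual a) { return { std::log(a.val), a.der / a.val }; }

    // A stage maps a scalar state to a scalar state; templated so it can run
    // on plain doubles (cheap forward pass) or on Duals (differentiated re-run).
    template <class T> T stage1(T x) { return exp(x * x); }
    template <class T> T stage2(T x) { return log(x + 1.0); }
    template <class T> T stage3(T x) { return x * x * x; }

    int main()
    {
        std::vector<std::function<double(double)>> stages  = { stage1<double>, stage2<double>, stage3<double> };
        std::vector<std::function<Dual(Dual)>>     stagesD = { stage1<Dual>,   stage2<Dual>,   stage3<Dual> };

        double x0 = 0.5;

        // Forward pass: plain arithmetic, store only the checkpoints
        // (the input of every stage), nothing else.
        std::vector<double> checkpoints;
        double x = x0;
        for (auto& s : stages)
        {
            checkpoints.push_back(x);
            x = s(x);
        }
        std::cout << "f(x0)       = " << x << "\n";

        // Backward pass: walk the stages in reverse; re-run each one from its
        // checkpoint with differentiation on, and chain the adjoints.
        double adjoint = 1.0;
        for (int k = (int)stages.size() - 1; k >= 0; --k)
        {
            Dual d = stagesD[k]({ checkpoints[k], 1.0 });
            adjoint *= d.der;
        }
        std::cout << "df/dx0      = " << adjoint << "\n";

        // Check against a bump-and-revalue estimate.
        double h = 1.0e-6, fUp = x0 + h, fDn = x0 - h;
        for (auto& s : stages) { fUp = s(fUp); fDn = s(fDn); }
        std::cout << "finite diff = " << (fUp - fDn) / (2 * h) << "\n";
    }

Only one stage is ever differentiated at a time, which is what keeps the memory footprint bounded regardless of the length of the calculation.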


Repository

The book comes with complete, professional C++ code for generic, parallel Monte-Carlo simulations and AAD. The code is freely available on GitHub. It is advised to watch the repo and follow the author to be notified of updates and improvements.

The related lecture material has its own GitHub repo, with a gentler introduction to AAD in Machine Learning and Finance. See Lectures by Antoine Savine.

Forum

Please leave your own reviews, along with questions, comments and suggestions, at the bottom of this page or on the book’s page on GoodReads.

Exercises and assignments

Exercises and assignments are being produced and will be posted separately. In the meantime, interested readers will find below the final hand-in for the autumn 2018 computational finance course at Copenhagen University, where the book is used as the curriculum:

https://antoinesavine.files.wordpress.com/2019/01/compfinhandin.pdf

Scripting


A draft of the second volume of Modern Computational Finance, co-authored with Jesper Andreasen and dedicated to cash-flow scripting, is now available for download as a complimentary preview.

We ask advanced readers to leave us their comments and suggestions on the dedicated thread on LinkedIn, to help us complete this work in the manner this critical technology deserves. You must be a member of the group Machine Learning in Quantitative Finance; please request membership if you are not yet a member.

Volume 1: out now on Amazon

Volume 2: coming soon

13 thoughts on “Books”

  1. Hi Antoine,
    On page 20 of your book, there is a small bug in the code at the bottom of the page. I’m not aware of any errata page on Wiley’s site, so I’ll just report it here in case you are not aware. Instead of

    size_t r = b.rows();

    it should be

    size_t r = b.cols();

    Also, some users of VS 2019 might encounter a build error with MatrixProduct.sln when OpenMP is turned on:

    1>c1xx : error C2338: two-phase name lookup is not supported for C++/CLI, C++/CX, or OpenMP; use /Zc:twoPhase-
    1>c1xx : fatal error C1903: unable to recover from previous error(s); stopping compilation

    To fix this, one needs to add /Zc:twoPhase- under C/C++ > Command Line > Additional Options.

    Best,


    1. Thank you, indeed, it appears that the code snippet at the bottom of page 20 is flawed; thank you so much for pointing it out. This snippet is not part of the ‘official’ code on GitHub, which is why the bug went unnoticed for a year.

      Commenting on this page is a good way to post errata, or you can contact me directly at aadcentral@asavine.com.

      OpenMP conflicting with two-phase lookup seems to be a known flaw in VS2019. Thank you for giving the workaround.

      Kind regards,
      Antoine


  2. You’re most welcome! Perhaps using non-square matrices, where possible, in future example solutions would make such indexing bugs easier to spot.

    Best,


  3. Hi,
    On page 34, in the code snippet at the top, do you mean to say

    vector inner;
    // …
    return inner;

    rather than

    vector inner;
    // …
    return result;


      1. Many thanks! I am starting to read the book and looking forward to the hard copy; I will definitely buy one when it comes out.


  4. The first book is fantastic, and I’m super happy to hear that the second on scripting and XVA is coming out soon.

    I know that you’ve standardized on C++, and that this is the standard in the industry, but have you taken a look at Julia at all? I ask for two reasons:

    1) Julia has a significant amount of support for AD (ForwardDiff, ReverseDiff, Zygote, etc.), including forward-mode AD that seems to be somewhat useful in certain Jacobian scenarios.

    2) Julia has significant metaprogramming capabilities and is, like LISP, homoiconic, which would seem to be a natural benefit when it comes to scripting.


    1. Hello, thank you for your kind words.

      The second book is on hold for a while because I have been working on differential machine learning, a really nice and novel way to learn ‘analytic’ approximations on the fly, with vast applications to the hottest problems in derivatives finance right now. I think you will like it; please have a look.

      Working paper: arxiv.org/abs/2005.02347

      Slides: http://www.deep-analytics.org

      GitHub: github.com/differential-machine-learning

      I don’t know anything about Julia.

