There are many benchmarks for LLMs. There are coding benchmarks such as SWE-Bench and HumanEval. The first evaluates an LLM's ability to work on real production codebases from popular repositories, yet it is Python-only and over 45% of its tasks come from Django. The second tests whether an LLM can generate functionally correct code from docstrings; it is easy to evaluate automatically and widely adopted, but the test set is small, Python-only, and becoming saturated.
There are general knowledge and reasoning benchmarks, such as MMLU-Pro and ARC, each with its own pros and cons.
No single benchmark can capture everything - but there is something in the collective that surely must measure something.
I would propose that there is a capability that falls between the cracks: step-by-step symbolic manipulation under strict rules, where any error cascades into complete failure.
This matters because many real-world tasks require this kind of precision: applying type transformations in code refactoring, following API contracts exactly, executing multi-step data transformations, or any situation where "close enough" isn't good enough.
Current maths benchmarks focus on getting the right final answer almost by any means necessary. Current coding benchmarks accept any code that passes tests. Neither directly measures whether a model can reliably execute a multi-step algorithmic process while showing its work.
Additionally, static benchmarks face inevitable data contamination as models are trained on increasingly large corpora. We need a task where we can generate unlimited fresh test cases with algorithmic verification.
I wanted to know how well an LLM can compute and manipulate symbols - because that is how I see coding - so I needed a task that is just that. But I also wanted to run lots of tests, across lots of LLMs, so the task also had to be easy to verify: almost pure symbolic manipulation with a definite answer that is easily checked. A coding challenge satisfies these requirements, in that you can check that the generated code actually compiles and runs. But how do you generate the problems from which the code is to be written, compiled, and run?
An analogous domain to our coding problem is mathematics. It involves symbolic manipulation under well-structured rules, and any error in the stepping process throws you off course, making the final answer almost certainly wrong.
This is what made me decide on the derivative.
The derivative is computationally simple and can be carried out through pure symbol manipulation.
It can also be computed algorithmically.
So it is easy to check whether two expressions for a derivative are equivalent.
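As a minimal sketch of that check with SymPy (the library that produced our reference answers), you compute the reference derivative algorithmically and test whether the difference from a candidate answer simplifies to zero:

```python
import sympy as sp

x = sp.symbols("x")
expr = (1 + sp.sqrt(x)) ** 3

# Reference derivative, computed algorithmically.
reference = sp.diff(expr, x)

# Two syntactically different but equivalent candidate answers.
answer_a = 3 * (1 + sp.sqrt(x)) ** 2 / (2 * sp.sqrt(x))
answer_b = sp.Rational(3, 2) / sp.sqrt(x) + 3 + 3 * sp.sqrt(x) / 2

# Equivalence check: the difference must simplify to zero.
print(sp.simplify(answer_a - reference) == 0)  # True
print(sp.simplify(answer_b - reference) == 0)  # True
```

Both forms pass, even though they look nothing alike on the page - which is exactly what makes derivatives cheap to grade at scale.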
The problem statement is simple: find the derivative of [expression] with respect to [variable].
Importantly, it’s easy to randomly generate expressions, and we can make them as tricky as we want, with whatever variables we want, so it’s incredibly unlikely that an LLM would ever memorise our dataset.
Even if we published the data set, we could just generate more.
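To give a flavour of how such generation can work - a toy sketch, not our actual generator; the `random_expression` helper, its operator set, and its depth parameter are invented for illustration:

```python
import random
import sympy as sp

def random_expression(depth: int, var: sp.Symbol) -> sp.Expr:
    """Recursively build a random expression over `var`."""
    if depth == 0:
        # Leaves: the variable itself or a small integer constant.
        return random.choice([var, sp.Integer(random.randint(1, 9))])
    op = random.choice(["add", "mul", "pow", "sin", "log", "sqrt"])
    if op == "add":
        return random_expression(depth - 1, var) + random_expression(depth - 1, var)
    if op == "mul":
        return random_expression(depth - 1, var) * random_expression(depth - 1, var)
    if op == "pow":
        return random_expression(depth - 1, var) ** random.randint(2, 4)
    if op == "sin":
        return sp.sin(random_expression(depth - 1, var))
    if op == "log":
        return sp.log(random_expression(depth - 1, var))
    return sp.sqrt(random_expression(depth - 1, var))

# Emit a fresh problem (LaTeX) and its ground-truth answer (SymPy).
x = sp.symbols("x")
random.seed(7)
expr = random_expression(depth=3, var=x)
print(sp.latex(expr), "->", sp.diff(expr, x))
```

Crank the depth up and the expressions get arbitrarily nasty, while the ground truth stays one `sp.diff` call away.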
But if we give an LLM a derivative to do, how do we know it's not "cheating" and using a computer to do the actual calculation? Well, we can just ask it to show its reasoning. By requiring the LLM to explain its step-by-step process, we can verify that it's actually applying calculus rules rather than calling out to a symbolic computation library.
Why does this matter beyond mathematics?
Symbolic manipulation ability is analogous to many practical programming tasks: refactoring code while preserving semantics, applying type transformations correctly, chaining API calls where each step depends on the previous result, or manipulating data structures through complex transformations.
If a model can't reliably apply the chain rule to $(1 + \sqrt{x})^3$, can it reliably compose three async functions where types must align?
The derivative benchmark tests whether models can execute multi-step transformations with zero tolerance for error - precisely the requirement for production code.
Of course, there is a lot of hand-waving in our justification for choosing the derivative, and we do not have the data to back it up rigorously. But if an LLM gets confused by the humble derivative, why would we expect it to manage other, more complicated tasks?
Consider differentiating $y = (1 + \sqrt{x})^3$ w.r.t $x$. The answer is:
$$\dfrac{d y}{dx} = 3(1 + \sqrt{x})^2 \cdot \frac{1}{2}x^{-\frac{1}{2}} = \dfrac{3(1 + \sqrt{x})^2}{2 \sqrt{x}} = \dfrac{3}{2 \sqrt{x}} + 3 + \dfrac{3\sqrt{x}}{2}$$
Asking Gemini 2.5 Flash to calculate this derivative produces the following reasoning:
The given expression is `(1 + sqrt(x))**3`, and we need to differentiate it with respect to `x`. This is a composite function, so we will use the chain rule. Let `u = 1 + sqrt(x)`. Then the expression becomes `u^3`. The derivative of `u^3` with respect to `u` is `3u^2`. Now we need to find the derivative of `u = 1 + sqrt(x)` with respect to `x`. `sqrt(x)` can be written as `x^(1/2)`. The derivative of `1` with respect to `x` is `0`. The derivative of `x^(1/2)` with respect to `x` using the power rule is `(1/2)x^((1/2)-1) = (1/2)x^(-1/2)`. So, `du/dx = 0 + (1/2)x^(-1/2) = (1/2)x^(-1/2)`. Applying the chain rule, `d/dx[f(g(x))] = f'(g(x)) * g'(x)`: `d/dx[(1 + sqrt(x))^3] = 3 * (1 + sqrt(x))^2 * (1/2)x^(-1/2)`. Now, simplify the result: `3 * (1 + sqrt(x))^2 * (1/2)x^(-1/2)` `= (3/2) * (1 + sqrt(x))^2 * (1/sqrt(x))` `= (3 * (1 + sqrt(x))^2) / (2 * sqrt(x))` Let's expand the `(1 + sqrt(x))^2` term to see if further simplification is good. `(1 + sqrt(x))^2 = 1^2 + 2*1*sqrt(x) + (sqrt(x))^2 = 1 + 2sqrt(x) + x`. So the expression becomes: `(3 * (1 + 2sqrt(x) + x)) / (2 * sqrt(x))` `= (3 + 6sqrt(x) + 3x) / (2 * sqrt(x))` We can also write this as: `(3/2) * x^(-1/2) * (1 + 2x^(1/2) + x)` `= (3/2) * (x^(-1/2) + 2x^(1/2)x^(-1/2) + x*x^(-1/2))` `= (3/2) * (x^(-1/2) + 2x^(0) + x^(1/2))` `= (3/2) * (x^(-1/2) + 2 + x^(1/2))` Both forms are acceptable, but the form `(3*(1 + sqrt(x))^2) / (2*sqrt(x))` is more compact. Final check of the steps: 1. Identify `f(u) = u^3` and `u = 1 + sqrt(x)`. 2. `f'(u) = 3u^2`. 3. `du/dx = d/dx(1 + x^(1/2)) = 0 + (1/2)x^(-1/2)`. 4. Apply chain rule: `3(1 + sqrt(x))^2 * (1/2)x^(-1/2)`. 5. Simplify: `(3/2) * (1 + sqrt(x))^2 * (1/sqrt(x)) = (3 * (1 + sqrt(x))^2) / (2 * sqrt(x))`. The simplification is correct.
From which it reports the correct answer of $\dfrac{3 \cdot (1 + \sqrt{x})^2}{2 \sqrt{x}}$.
The reasoning from Claude Sonnet 4.5 on the same expression is the following:
I need to find the derivative of (1 + sqrt(x))^3 with respect to x. First, let me simplify the expression by expanding it before differentiating. This will make the differentiation more straightforward than using the chain rule on the composite form. Expanding (1 + sqrt(x))^3 using the binomial theorem: (1 + sqrt(x))^3 = 1^3 + 3(1^2)(sqrt(x)) + 3(1)(sqrt(x))^2 + (sqrt(x))^3 = 1 + 3·sqrt(x) + 3·x + x^(3/2) = 1 + 3·x^(1/2) + 3·x + x^(3/2) Now I'll differentiate each term with respect to x using the power rule: - d/dx[1] = 0 - d/dx[3·x^(1/2)] = 3·(1/2)·x^(1/2 - 1) = (3/2)·x^(-1/2) - d/dx[3·x] = 3·1·x^0 = 3 - d/dx[x^(3/2)] = (3/2)·x^(3/2 - 1) = (3/2)·x^(1/2) Combining all terms: d/dx[(1 + sqrt(x))^3] = 0 + (3/2)·x^(-1/2) + 3 + (3/2)·x^(1/2) Simplifying: = (3/2)·x^(-1/2) + 3 + (3/2)·x^(1/2) = (3/(2·sqrt(x))) + 3 + (3·sqrt(x))/2 Converting back to a more standard form with sqrt notation: = 3/(2·sqrt(x)) + 3 + (3·sqrt(x))/2
Producing the correct answer, in the expanded form: $\dfrac{3}{2 \sqrt{x}} + 3 + \dfrac{3\sqrt{x}}{2}$.
Of course, not every model gets every answer right, all of the time.
For example, asking Claude Sonnet 4.5 to differentiate $\dfrac{1 + 16 t^2}{(4 t)^3}$ produces the following reasoning:
I need to find the derivative of (1 + 16*t^2)/(4*t)^3 with respect to t. First, let me simplify this expression before differentiating, as the instructions emphasize simplification first. Starting with: (1 + 16*t^2)/(4*t)^3 Simplify the denominator: (4*t)^3 = 4^3 * t^3 = 64*t^3 So the expression becomes: (1 + 16*t^2)/(64*t^3) Now I'll separate this into individual terms by dividing each term in the numerator by the denominator: = 1/(64*t^3) + 16*t^2/(64*t^3) = 1/(64*t^3) + 16/(64*t) = 1/(64*t^3) + 1/(4*t) Converting to negative exponents for easier differentiation: = (1/64)*t^(-3) + (1/4)*t^(-1) Now I can differentiate using the power rule: d/dt[t^n] = n*t^(n-1) For the first term: d/dt[(1/64)*t^(-3)] = (1/64)*(-3)*t^(-4) = -3/(64*t^4) For the second term: d/dt[(1/4)*t^(-1)] = (1/4)*(-1)*t^(-2) = -1/(4*t^2) Combining these: = -3/(64*t^4) - 1/(4*t^2) To express in standard form, I'll convert back and find a common denominator: = -3/(64*t^4) - 16/(64*t^4) = -19/(64*t^4) Or in exponent notation: -19/(64*t^4)
Claude Sonnet 4.5 correctly simplifies the expression:
$$\dfrac{1 + 16 t^2}{(4 t)^3} = \frac{1}{64} t^{-3} + \frac{1}{4} t^{-1}$$
It then correctly differentiates each of these terms, but mis-combines them in the final lines: $-\frac{1}{4 t^{2}} = -\frac{16 t^{2}}{64 t^{4}}$, not $-\frac{16}{64 t^{4}}$, so it reports $-\frac{19}{64 t^{4}}$ instead of the correct $-\frac{16 t^{2} + 3}{64 t^{4}}$.
Having seen some examples, what is it that we are actually measuring? We are clearly not measuring any deep mathematical reasoning. The derivative has four "rules" (sum, product, quotient, and composite), and a bunch of shortcuts (derivative of $\sin$, $\log$, etc.) - it is not complicated.
What we are measuring is an LLM's ability to perform precise symbolic manipulation following algorithmic rules. While derivatives don't require deep mathematical insight, they do demand applying the right rule at the right time, correctly handling composition and nested structures, and maintaining consistency throughout multi-step computations. Any minor hallucination or inconsistency in this process produces the wrong answer. In this sense, it's analogous to coding: you need to manipulate symbols precisely according to well-defined rules, and even small errors cascade into failure. Where this mathematics differs from coding is that there is no compiler or parser to tell you that you have gone wrong!
Checking if an answer is right is fairly easy, and we can safely assume that if an answer is right, then the right steps were followed.
However, while knowing that an answer is wrong is also easy, "wrong" is a very fuzzy category: an answer can be mathematically wrong, or merely impossible to parse.
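One cheap way to establish that an answer is mathematically wrong - a sketch, not necessarily how our validator works - is to compare it against SymPy's derivative at a few random points; the mis-combined answer from the example above fails on the first sample:

```python
import random
import sympy as sp

t = sp.symbols("t", positive=True)
truth = sp.diff((1 + 16 * t**2) / (4 * t) ** 3, t)  # SymPy's reference derivative
claimed = -19 / (64 * t**4)                          # the mis-combined answer

# Evaluate both at random points; any mismatch proves the claim wrong.
random.seed(0)
for _ in range(5):
    point = random.uniform(0.5, 10.0)
    a, b = float(truth.subs(t, point)), float(claimed.subs(t, point))
    if abs(a - b) > 1e-9:
        print(f"mismatch at t={point:.3f}: {a:.6g} vs {b:.6g}")
        break
```

Note the two answers agree at $t = 1$ (where $16 t^2 + 3 = 19$), which is why sampling several points matters.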
As you can see from the reasoning examples above, most LLMs "compute" in LaTeX and return their answers the same way (this was part of the prompt). But we do our validation in Python, so there are mismatches such as `\sin{x}` versus `sin(x)`.
We thus had to build a parser to convert between these formats, which was mostly straightforward, with some subtle issues. For example, the Unicode minus (U+2212) is not the ASCII hyphen (U+002D): − versus -. To a parser the difference is obvious, but when you are a human visually checking in a web browser, trying to work out why a clearly correct answer was marked "wrong", it can cause some headaches.
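To give a flavour of the conversions involved - a toy `normalise` helper invented for illustration, covering only a few cases; the real parser handles many more:

```python
import re

def normalise(latex: str) -> str:
    """Convert a few common LaTeX forms into SymPy-friendly text.
    Illustrative only -- not the production parser."""
    s = latex.replace("\u2212", "-")                              # Unicode minus -> ASCII hyphen
    s = re.sub(r"\\(sin|cos|tan|log)\{([^}]*)\}", r"\1(\2)", s)   # \sin{x} -> sin(x)
    s = re.sub(r"\\frac\{([^}]*)\}\{([^}]*)\}", r"(\1)/(\2)", s)  # \frac{a}{b} -> (a)/(b)
    s = re.sub(r"\^\{([^}]*)\}", r"**(\1)", s)                    # x^{2} -> x**(2)
    s = s.replace(r"\left", "").replace(r"\right", "")            # drop sizing commands
    return s

print(normalise("\\sin{x} \u2212 \\frac{1}{2}"))  # -> sin(x) - (1)/(2)
```

Even this sketch shows why the problem is fiddly: nested braces, spacing commands, and implicit multiplication all need real parsing, not regex whack-a-mole.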
LLMs are also very prone to adding surrounding text, sometimes verbose ("The derivative of the function is [all the reasoning] ...") and sometimes short (`y' = `). In both cases, the derivative given may actually be correct, but we cannot parse the expression, because it is not valid LaTeX.
Should we correct this? Maybe. We could, for example, chain the same model with a flow of `[do derivative] -> [extract just answer] -> [simplify latex] -> [validate answer]`, where each step is the LLM doing the work, and we can check at which stage it went wrong. But this is V1, and tokens are expensive, so that can come later.
We are not manually checking answers (and have zero desire to). This benchmark requires an LLM to correctly produce a derivative and return the answer in a format that we can check algorithmically. If an LLM perfectly calculates the derivative but cannot format the answer, it will do badly on this benchmark.
Is this benchmark fair? Yes and no.
It's fair in that every model receives the same expressions and the same instructions. But it's also not fair in that we know each model requires its own "prompting strategy".
A clear example of this is how well Claude 3.7 Sonnet-Thinking does on the benchmark versus how badly Claude Sonnet 4.5 does. If we told Claude Sonnet 4.5 to "ultra think", it would probably do better. But that is a model-specific hack, and every model could probably benefit from one of its own. Perhaps the prompt itself favouring certain model families is part of the test - if a model cannot follow standard instructions reliably, that is valuable information.
Again, we are not measuring mathematical reasoning. We are measuring symbolic manipulation. A model performing badly when doing the derivative does not make it a poor model.
Tokens remain expensive, and I am an individual with a budget. In an ideal world, I would have kept testing far beyond the current 60-odd expressions, making them ever more complicated until the accuracy of the top models dropped closer to 50%. Right now, the benchmark is saturated at the top, so it is going to be difficult to place newer models. And given how well the top models did, it is also impossible to work out what, exactly, they get wrong.
A lack of data means we cannot test, for example, the differences between "think step-by-step" and just giving an answer. We cannot correlate the number of output tokens with the likelihood of getting an answer wrong. Can a model correct itself if we tell it we cannot compile the answer? Do models get the answer wrong because it requires too many rules, has too many symbols, is too long, or too deep? Do the top models here also perform well on coding problems? Does published model size correlate with accuracy?
That being said, it is notable that when a provider tells us a new model is "smarter" than the previous version, it does in fact score better here - so we can assume that we are certainly measuring something.
It is also rather interesting that the models perceived to be better at coding sit at the top of the list - which is precisely the confirmation bias I was looking for, so I am happy about that.
Honestly, I am rather surprised with how well the top providers actually did. Going in, I just assumed they would do a lot worse, and that there would be massive hallucinations causing the models to miss and drop symbols.
On the main listing, you will see that different models were tested a different number of times. This was purely a matter of cost-efficiency: if a model has no (statistical) hope of reaching the top five, we have no desire to keep testing it. For the same reason, we did not test the most expensive models, preferring to spend more runs on the "cheaper" ones. Feel free to reach out if you have an API key I can use to change this.
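We never formalised this as a published stopping rule, but the kind of check involved can be sketched with a Wilson score interval on a model's accuracy (the `wilson_interval` helper below is illustrative, not from our codebase):

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial accuracy estimate."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (centre - margin, centre + margin)

# A model at 40/60 whose upper bound sits below a leader's lower bound
# can be dropped without spending more tokens on it.
low, high = wilson_interval(40, 60)
print(f"accuracy 0.667, 95% CI [{low:.3f}, {high:.3f}]")
```

With only 60 trials the interval is wide, which is exactly why mid-table models needed more runs than clear leaders or clear stragglers.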
We used OpenRouter to access the models, so if OpenRouter routes to a provider that degrades a model's output, that will obviously affect the results here. The model calls and the extraction of the "reasoning" and "answer" fields were done with DSPy. If a model could not supply these fields, it was excluded; if it gave an answer containing invalid LaTeX (either extra text or a missing bracket), we marked its response as an error.
Finally, we did not multi-shot the responses. If a model got the wrong answer, we did not try again.
The derivative benchmark succeeds not because it measures mathematical reasoning per se - it succeeds because it measures algorithmic precision applied to well-defined symbolic rules. This is a fundamental capability for many real-world applications, yet one that existing benchmarks largely overlook.
Our key insight is that symbolic precision is not binary but exists on a spectrum, and only the most advanced models consistently achieve high performance. This stratification provides valuable guidance for practitioners deciding which models to trust for precision-critical applications.
As models continue to improve, this benchmark will evolve. But the core contribution remains: we've established that symbolic manipulation precision can be measured efficiently, predicted statistically, and varies dramatically across models. These insights move us closer to the ultimate goal: building models we can trust to be correct, every single time.
The challenge isn't just building models that are smart - it's building models that are precise, and knowing which is which.
We have, I suspect, contributed a novel benchmark to the benchmark ecosystem. There has been previous work on testing an LLM's ability to take derivatives, but as far as I am aware, it has not been turned into a ranking at this scale. It would be interesting to test other largely mechanical symbolic methods - integration, solving standard differential equations, and so on - but we do not currently have enough tokens for that to make a valuable benchmark, so further work is unlikely.
We had considered getting an LLM to compute the nth row of a cellular automaton or run a Turing machine, but settled on the derivative because we thought it cooler.
I hope that we have demonstrated that top models are actually really good at symbolic manipulation in some settings, and that there is value in doing more.
The following 50 expressions are the first ones used in the evaluation dataset. Each shows the derivative operation inline followed by the SymPy result.
| # | Expression and Result |
|---|---|
| 1 | $\frac{d}{dy} \left[ y \left(\pi p + y - 6 + \log{\left(\frac{2}{3} \right)}\right) \log{\left(- y^{2} + \log{\left(y \right)} - \frac{62}{3} \right)} \right]$ = `(3*(2*y**2 - 1)*(pi*p + y - 6 + log(2/3)) + (3*y**2 - 3*log(y) + 62)*(pi*p + 2*y - 6 + log(2/3))*log(-y**2 + log(y) - 62/3))/(3*y**2 - 3*log(y) + 62)` |
| 2 | $\frac{d}{dx} \left[ \frac{\sin{\left(m x \right)}}{x} \right]$ = `(m*x*cos(m*x) - sin(m*x))/x**2` |
| 3 | $\frac{d}{dx} \left[ \left(n + 2 r \sin{\left(r x \right)}\right)^{p} \right]$ = `2*p*r**2*(n + 2*r*sin(r*x))**(p - 1)*cos(r*x)` |
| 4 | $\frac{d}{dt} \left[ \cos^{2}{\left(t \right)} \right]$ = `-sin(2*t)` |
| 5 | $\frac{d}{dt} \left[ t^{3} x + t x^{2} \right]$ = `x*(3*t**2 + x)` |
| 6 | $\frac{d}{dy} \left[ - 7 y^{\frac{7}{2}} \left(p + 1\right) e^{e^{\frac{343}{64}}} \log{\left(10 y^{2} + 3 y \right)} \right]$ = `7*y**(5/2)*(p + 1)*(-40*y - 7*(10*y + 3)*log(y*(10*y + 3)) - 6)*exp(exp(343/64))/(2*(10*y + 3))` |
| 7 | $\frac{d}{dx} \left[ \left(\sqrt{x} + 1\right)^{3} \right]$ = `3*(sqrt(x) + 1)**2/(2*sqrt(x))` |
| 8 | $\frac{d}{dt} \left[ \frac{\cot{\left(t \right)}}{t^{2}} \right]$ = `(-t/sin(t)**2 - 2*cot(t))/t**3` |
| 9 | $\frac{d}{dx} \left[ - 5 x^{\frac{3}{4}} + 2 x \right]$ = `2 - 15/(4*x**(1/4))` |
| 10 | $\frac{d}{dy} \left[ \frac{B}{A y^{3} + b} \right]$ = `-3*A*B*y**2/(A*y**3 + b)**2` |
| 11 | $\frac{d}{dt} \left[ 64 t^{4} + t \left(2 t - 2\right) \log{\left(t \right)} - 2 + \sin^{2}{\left(2 \right)} \right]$ = `256*t**3 + 4*t*log(t) + 2*t - 2*log(t) - 2` |
| 12 | $\frac{d}{dx} \left[ \frac{1}{1 - \frac{1}{x}} \right]$ = `-1/(x - 1)**2` |
| 13 | $\frac{d}{du} \left[ 9 u \left(2 u - 6\right) + \sin{\left(\sqrt{u} \right)} + 20 \cos{\left(10 \right)} \right]$ = `36*u - 54 + cos(sqrt(u))/(2*sqrt(u))` |
| 14 | $\frac{d}{dx} \left[ \sqrt{\frac{x}{x + 1}} \right]$ = `sqrt(x/(x + 1))/(2*x*(x + 1))` |
| 15 | $\frac{d}{dx} \left[ \sqrt[4]{x} + \frac{1}{\sqrt{x}} \right]$ = `-1/(2*x**(3/2)) + 1/(4*x**(3/4))` |
| 16 | $\frac{d}{dt} \left[ \frac{\sin{\left(t \right)}}{\tan{\left(t \right)} + 1} \right]$ = `(-sin(t)/cos(t)**2 + sqrt(2)*sin(t + pi/4))/(tan(t) + 1)**2` |
| 17 | $\frac{d}{dy} \left[ \frac{y^{6} \left(2 y - \sin{\left(\frac{3}{2} \right)} + 7\right)^{3} \log{\left(y \right)}^{3}}{8} \right]$ = `3*y**5*(2*y - sin(3/2) + 7)**2*(6*y*log(y) + 2*y - 2*log(y)*sin(3/2) + 14*log(y) - sin(3/2) + 7)*log(y)**2/8` |
| 18 | $\frac{d}{dx} \left[ 10077696 \left(3 x + \frac{6}{5}\right)^{3} \sin^{3}{\left(7 \right)} + \cos^{2}{\left(x^{2} \right)} + \cos{\left(2 \right)} \right]$ = `-2*x*sin(2*x**2) + 816293376*(5*x + 2)**2*sin(7)**3/25` |
| 19 | $\frac{d}{dx} \left[ \sin{\left(2 x + \frac{7}{4} \right)} \right]$ = `2*cos(2*x + 7/4)` |
| 20 | $\frac{d}{dx} \left[ \left(2 x^{3} - 5 x^{2} + 4\right)^{5} \right]$ = `10*x*(3*x - 5)*(2*x**3 - 5*x**2 + 4)**4` |
| 21 | $\frac{d}{dx} \left[ x^{2} \sin{\left(\pi x \right)} \right]$ = `x*(Pi*x*cos(Pi*x) + 2*sin(Pi*x))` |
| 22 | $\frac{d}{dx} \left[ \tan{\left(x^{2} \right)} \right]$ = `2*x/cos(x**2)**2` |
| 23 | $\frac{d}{dt} \left[ 4 t^{8} \left(2 t + \log{\left(6 t \right)}\right) \left(\sqrt{t} - t e^{t} \sin{\left(\frac{1}{2} \right)} + 97 t - \frac{64}{27}\right) e^{2 t} \cos{\left(\sqrt{t} + 10 \right)} \right]$ = `2*(-27*t**(15/2)*(2*t + log(6*t))*(2*sqrt(t)*(t*exp(t)*sin(1/2) + exp(t)*sin(1/2) - 97) - 1)*cos(sqrt(t) + 10) - t**(15/2)*(2*t + log(6*t))*(27*sqrt(t) - 27*t*exp(t)*sin(1/2) + 2619*t - 64)*sin(sqrt(t) + 10) + 4*t**8*(2*t + log(6*t))*(27*sqrt(t) - 27*t*exp(t)*sin(1/2) + 2619*t - 64)*cos(sqrt(t) + 10) + 2*t**7*(2*t + 1)*(27*sqrt(t) - 27*t*exp(t)*sin(1/2) + 2619*t - 64)*cos(sqrt(t) + 10) + 16*t**7*(2*t + log(6*t))*(27*sqrt(t) - 27*t*exp(t)*sin(1/2) + 2619*t - 64)*cos(sqrt(t) + 10))*exp(2*t)/27` |
| 24 | $\frac{d}{dz} \left[ 27 z^{3} \left(- 3 z - 8 - \sin{\left(3 \right)}\right)^{3} \left(z + \sin{\left(z \right)} - 8\right)^{3} \cos^{3}{\left(3 z - 9 \right)} \right]$ = `81*z**2*(z + sin(z) - 8)**2*(3*z + sin(3) + 8)**2*(-z*(cos(z) + 1)*(3*z + sin(3) + 8)*cos(3*z - 9) + 3*z*(z + sin(z) - 8)*(3*z + sin(3) + 8)*sin(3*z - 9) - 3*z*(z + sin(z) - 8)*cos(3*z - 9) - (z + sin(z) - 8)*(3*z + sin(3) + 8)*cos(3*z - 9))*cos(3*z - 9)**2` |
| 25 | $\frac{d}{dx} \left[ \log{\left(x^{3} \sqrt{- \sin{\left(6 \right)}} \left(x + 3\right) \left(x^{2} + \frac{x}{4} + e^{-5}\right) \right)} \right]$ = `(24*x**3*exp(5) + 65*x**2*exp(5) + 16*x + 12*x*exp(5) + 36)/(x*(4*x**3*exp(5) + 13*x**2*exp(5) + 4*x + 3*x*exp(5) + 12))` |
| 26 | $\frac{d}{du} \left[ \sqrt{2} \sqrt{u^{3} \left(\pi p + \log{\left(6 \right)}\right)^{2} e^{- 3 u} \log{\left(u \right)} \cos{\left(u \right)}} \right]$ = `sqrt(2)*sqrt(u**3*(pi*p + log(6))**2*exp(-3*u)*log(u)*cos(u))*(-u*log(u)*tan(u) - 3*u*log(u) + 3*log(u) + 1)/(2*u*log(u))` |
| 27 | $\frac{d}{dx} \left[ \left(x^{3} + x^{2}\right)^{4} \right]$ = `x**7*(x + 1)**3*(12*x + 8)` |
| 28 | $\frac{d}{dx} \left[ \sqrt{\cos{\left(x \right)}} \right]$ = `-sin(x)/(2*sqrt(cos(x)))` |
| 29 | $\frac{d}{dx} \left[ \frac{1}{\sqrt[3]{\sqrt{x} + x}} \right]$ = `(-2*sqrt(x) - 1)/(6*sqrt(x)*(sqrt(x) + x)**(4/3))` |
| 30 | $\frac{d}{dx} \left[ \left(- x^{2} + x\right) \left(2 x^{2} + 1\right) \right]$ = `-8*x**3 + 6*x**2 - 2*x + 1` |
| 31 | $\frac{d}{dt} \left[ \frac{\sin{\left(t \right)}}{\cos{\left(t \right)} + 1} \right]$ = `1/(cos(t) + 1)` |
| 32 | $\frac{d}{dt} \left[ \frac{16 t^{2} + 1}{64 t^{3}} \right]$ = `(-16*t**2 - 3)/(64*t**4)` |
| 33 | $\frac{d}{dx} \left[ \frac{\cos{\left(x \right)}}{1 - \sin{\left(x \right)}} \right]$ = `-1/(sin(x) - 1)` |
| 34 | $\frac{d}{dt} \left[ \frac{3 - 2 t}{5 t + 1} \right]$ = `-17/(5*t + 1)**2` |
| 35 | $\frac{d}{dz} \left[ \frac{\sqrt{3} \sqrt{- \log{\left(z \right)}} \left(z + \cos{\left(2 z - \frac{28}{3} \right)}\right) \sin{\left(\cos{\left(z \right)} \right)}}{2} \right]$ = `sqrt(3)*sqrt(-log(z))*(-2*z*((z + cos(2*z - 28/3))*sin(z)*cos(cos(z)) + (2*sin(2*z - 28/3) - 1)*sin(cos(z)))*log(z) + (z + cos(2*z - 28/3))*sin(cos(z)))/(4*z*log(z))` |
| 36 | $\frac{d}{dt} \left[ \frac{5 t}{t^{3} - t - 1} \right]$ = `-10*t**3/(-t**3 + t + 1)**2 - 5/(-t**3 + t + 1)**2` |
| 37 | $\frac{d}{dx} \left[ \tan{\left(\sqrt{1 - x} \right)} \right]$ = `-1/(2*sqrt(1 - x)*cos(sqrt(1 - x))**2)` |
| 38 | $\frac{d}{dz} \left[ z \left(\sqrt{z} + z^{2} + 2 z + \left(3 z + 4\right)^{2} + \log{\left(z \right)} + \cos{\left(z \right)}\right) \sin{\left(- 10 z^{2} + z + \cos{\left(z \right)} + \frac{3}{2} \right)} \right]$ = `(sqrt(z)*(-z*(20*z + sin(z) - 1)*cos(-10*z**2 + z + cos(z) + 3/2) + sin(-10*z**2 + z + cos(z) + 3/2))*(sqrt(z) + 10*z**2 + 26*z + log(z) + cos(z) + 16) + (z**(3/2)*(40*z - 2*sin(z) + 52) + 2*sqrt(z) + z)*sin(-10*z**2 + z + cos(z) + 3/2)/2)/sqrt(z)` |
| 39 | $\frac{d}{du} \left[ p \sinh{\left(2 \sqrt{2} \right)} + u^{2} + u + e^{80 u - 40} - 9 + e^{- \frac{33}{10}} \right]$ = `2*u + 80*exp(80*u - 40) + 1` |
| 40 | $\frac{d}{dx} \left[ \sqrt{x + \sqrt{\sqrt{x} + x}} \right]$ = `(4*sqrt(x)*sqrt(sqrt(x) + x) + 2*sqrt(x) + 1)/(8*sqrt(x)*sqrt(sqrt(x) + x)*sqrt(x + sqrt(sqrt(x) + x)))` |
| 41 | $\frac{d}{dx} \left[ \left(- 25 x^{2} + 2 x\right)^{3} + e^{x^{3}} + \log{\left(x + \sin{\left(x \right)} - 7 \right)} \right]$ = `(3*x**2*((2 - 50*x)*(25*x - 2)**2 + exp(x**3))*(x + sin(x) - 7) + cos(x) + 1)/(x + sin(x) - 7)` |
| 42 | $\frac{d}{du} \left[ \frac{u + 1}{4 u - 1} \right]$ = `-5/(4*u - 1)**2` |
| 43 | $\frac{d}{dt} \left[ \frac{5 t^{2}}{2} + 6 t \right]$ = `5*t + 6` |
| 44 | $\frac{d}{dx} \left[ \frac{1}{3 - x} \right]$ = `(x - 3)**(-2)` |
| 45 | $\frac{d}{dw} \left[ \left(\frac{1}{w} - \frac{1}{w^{4}}\right) \left(w^{2} + 3 w\right) \right]$ = `(w**4 + 2*w + 9)/w**4` |
| 46 | $\frac{d}{dx} \left[ \frac{x}{2 - \tan{\left(x \right)}} \right]$ = `(x/cos(x)**2 - tan(x) + 2)/(tan(x) - 2)**2` |
| 47 | $\frac{d}{dr} \left[ \frac{\left(r^{2} - 1\right)^{3}}{\left(2 r + 1\right)^{5}} \right]$ = `(r**2 - 1)**2*(-10*r**2 + 6*r*(2*r + 1) + 10)/(2*r + 1)**6` |
| 48 | $\frac{d}{dx} \left[ \sqrt[5]{x \tan{\left(x \right)}} \right]$ = `(x*tan(x))**(1/5)*(2*x/sin(2*x) + 1)/(5*x)` |
| 49 | $\frac{d}{dt} \left[ \left(t + 1\right)^{\frac{2}{3}} \left(2 t^{2} - 1\right)^{3} \right]$ = `2*(2*t**2 - 1)**2*(2*t**2 + 18*t*(t + 1) - 1)/(3*(t + 1)**(1/3))` |
| 50 | $\frac{d}{dt} \left[ \tan{\left(t \right)} \sec{\left(t \right)} \right]$ = `(2*tan(t)**2 + 1)*sec(t)` |