Computing desk
< November 10 | << Oct | November | Dec >> | November 12 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
Whenever a new version of a software product comes out, I often find that my colleagues and I end up wasting a lot of time trying to work out how to do things we could easily and quickly do in the old version: things that now require a different procedure, or are accessed from a different menu, or have a different name, or features/functions that turn out to have been removed or now behave differently. This tends to be worst with things like new versions of operating systems (especially upgrading from Windows 7) or major interface redesigns (e.g. the introduction of the ribbon menus in Office 2007), but it occurs with most software from most companies. (And that's when the new version works correctly, not counting new bugs or compatibility problems.) It occurred to me that this problem, multiplied across all businesses and across the world, must result in an absolutely staggering amount of lost time (and hence money). Has any analysis been done to actually quantify this? (Google searches have turned up various articles about time lost due to more specific problems like slow infrastructure, excessive e-mails, or software bugs generally, but I can't find anything specifically about time lost due to "working as intended" changes.) Iapetus ( talk) 11:56, 11 November 2021 (UTC)
I have the following two functions, which are inverses of each other:
#include <math.h>

// Maps any real x into the open interval (0, 1).
double sigmoid(double value) {
    return 1 / (1 + exp(-value));
}

// Inverse of sigmoid: maps (0, 1) back onto the reals.
double logit(double value) {
    return log(value / (1 - value));
}
Unfortunately, for all values larger than, say, 36 or so, logit(sigmoid(x)) simply returns "INF". Is there any way to prevent such an extreme loss of precision? Earl of Arundel ( talk) 13:25, 11 November 2021 (UTC)
// A rational alternative to sigmoid: maps [0, inf) into [0, 1).
double squash(double value) {
    return value / (value + 1);
}

// Inverse of squash on [0, 1).
double unsquash(double value) {
    return -(value / (value - 1));
}
Earl of Arundel ( talk) 14:14, 12 November 2021 (UTC)
Since sigmoid(36.8) and sigmoid(36.9) evaluate to the same value, namely 1, no function can undo sigmoid for arguments that high. Let M be the highest argument in which you are interested. You want some monotonic function σ for which σ(x) → 1 as x → ∞ to have an inverse λ with implementations for which λ(σ(x)) ≈ x provided that abs(x) ≤ M. This can only be possible if σ(M) < 1 by a non-negligible amount. -- Lambiam 22:43, 11 November 2021 (UTC)
So the squash/unsquash pair of functions doesn't really work very well with negative numbers. Luckily, the problem is easily fixed: in cases where the input is negative, we basically just need to reflect the input and negate the output (i.e. squash(x) = -squash(-x) for x < 0).
// Odd extension of squash: now maps all reals into (-1, 1).
double squash(double value) {
    double sign = value < 0 ? -1 : 1;
    return value / (sign * value + 1);
}

// Matching inverse of the odd-extended squash on (-1, 1).
double unsquash(double value) {
    double sign = value < 0 ? -1 : 1;
    return -(value / (sign * value - 1));
}
The result can be seen in the following graph.
In blue is the standard sigmoid. The so-called squash function is defined by the orange curve for all negative values and by the red one for all positive values. The result is a very nice smooth curve onto the interval (-1, 1)! Earl of Arundel ( talk) 17:23, 12 November 2021 (UTC)
// An algebraic sigmoid: maps the reals into (0, 1), approaching its
// limits only polynomially fast, so it never saturates prematurely.
double squish(double x) {
    return 1 / (1 - x + sqrt(1 + x * x));
}

// Exact inverse of squish on (0, 1).
double unsquish(double y) {
    return (2 * y - 1) / (2 * y * (1 - y));
}
What an elegant solution! Just absolutely beautiful, to be quite honest. But if I may ask, why is it so important to satisfy that criterion? Earl of Arundel ( talk) 22:00, 12 November 2021 (UTC)
// Builds a (0, 1)-valued sigmoid-like contraction from any increasing
// function f: contract(x) = 1 / (1 + f(-x) + sqrt(1 + f(-x)^2)).
// With f the identity this reduces to squish above.
double lambiam_contract(double (*function)(double), double value) {
    double rev = function(-value);
    return 1 / (1 + (rev + sqrt(1 + rev * rev)));
}

// E.g., using the sinh function:
double sinh_contract(double value) {
    return lambiam_contract(sinh, value);
}