Welcome!
Hello, NickyMcLean, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are a few good links for newcomers:
I hope you enjoy editing here and being a Wikipedian! Please sign your name on talk pages using four tildes (~~~~); this will automatically produce your name and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or place {{helpme}} on your talk page and someone will show up shortly to answer your questions. Again, welcome! Cheers, Tangotango 05:18, 27 April 2006 (UTC)
Greetings. I have made significant changes to your pi-as-computed-by-Archimedes example. It's a really good example, but I think that doing it for both the inscribed and circumscribed polygons is somewhat redundant and confusing. (It wasn't redundant for Archimedes -- he needed error bounds. But we already know the answer.) The fact that the numbers get close to pi and then veer away is a nice touch. I also used exact 64-bit precision, since that's "standard".
Anyway, I thought I'd give you a 'heads up' on this; I don't know whether this is on your watchlist. Feel free to discuss this on the floating point talk page, or my talk page. William Ackerman 00:54, 27 July 2006 (UTC)
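For anyone curious what "get close to pi and then veer away" looks like in 64-bit arithmetic, here is a minimal sketch (my own illustration, not the article's actual example) of the circumscribed-polygon half-angle recurrence, in its cancellation-prone form and an algebraically equivalent stable rearrangement:

```python
import math

# Half-angle recurrence for t = tan(pi/n), starting from a hexagon (n = 6).
# The first form subtracts two nearly equal quantities and loses digits;
# the second is the same quantity rearranged to avoid the cancellation.
t_bad = t_good = 1.0 / math.sqrt(3.0)
n = 6
for _ in range(22):
    t_bad = (math.sqrt(t_bad * t_bad + 1.0) - 1.0) / t_bad     # cancellation here
    t_good = t_good / (math.sqrt(t_good * t_good + 1.0) + 1.0)  # rearranged
    n *= 2

pi_bad = n * t_bad    # approaches pi, then drifts away
pi_good = n * t_good  # stays close to pi
```

After 22 doublings the unstable form has already lost several digits while the stable form remains accurate to near machine precision.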
I just made a major edit to reorganize the page. (Basically, I moved the hideous "accuracy and misconceptions" section down next to the equally hideous "problems" section, so that they can all be cleaned up / butchered together.) Unfortunately, I got a notice of an editing conflict with you, apparently covering your last 2 changes: 22:03 30 Aug ("re-order for flow") and 22:08 30 Aug ("accuracy and common misconceptions"). Since my changes were much more extensive than yours, I took the liberty of saving my changes, thereby blowing yours away. I will now look at yours and attempt to repair the damage. Sorry. Someday this page (which, after all, is an extremely important subtopic of computers) will look respectable. :-) William Ackerman 23:07, 30 August 2006 (UTC)
It looks as though I didn't break anything after all. I got a false alarm. William Ackerman 00:16, 31 August 2006 (UTC)
No worries! I got into a bit of a tangle with the browser (Firefox), realising that I had forgotten a typo-level change (naturally, this is observed as one activates "post", not after activating "preview") and used the back arrow. I had found via unturnut uxplurur a few days earlier that the back arrow from a preview lost the entire text being edited, so I didn't really think that back-arrowing to add a further twiddle would work, but it seemed worth a try for a quick fix. But no... On restarting properly, the omitted twiddle could be added.
I agree that floating-point arithmetic is important! I recall a talk I attended in which colleagues presented a graph of probabilities (of whatever), and I asked what was the significance of the Y-axis's highest annotation being not 1 but 1.01? Err... ahem... On another occasion I thought to use a short-cut for deciding how many digits to allow for a number when annotating a graph, via Log10(MaxVal), and learnt, ho ho, that on an IBM360 descendant, Log10(10.0000000) came out as 0.99999blah which when truncated to an integer was 0, not 1. Yes, I shouldn't have done it that way, but I was spending time on the much more extensive issues of annotation and layout and thought that a short cut would reduce distraction from this. In the end, a proper integer-based computation with pow:=pow*10; type stepping was prepared. And there are all the standard difficulties in computation with limited precision as well.
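The short-cut and its integer-based replacement can be sketched like so (a hypothetical reconstruction, not the original graphing code):

```python
import math

def digits_via_log(x: float) -> int:
    # The tempting short-cut: fragile, because on some arithmetics
    # log10 of exactly 10 comes out as 0.99999..., and truncation
    # then under-counts by one.
    return int(math.log10(x)) + 1

def digits_via_loop(n: int) -> int:
    # The safe pow := pow*10 stepping: pure integer arithmetic,
    # so no rounding can creep in.
    p, d = 10, 1
    while n >= p:
        p *= 10
        d += 1
    return d
```

On IEEE hardware the log form happens to survive the Log10(10.0) case, but the loop form is immune by construction.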
Nicky,
It would be fine for the example to cancel further (if that is what you are asking). I was trying to tie into an earlier example. Maybe we need to name the values. But I was trying to show that if you compute z := x + y (rounded to 7 digits) then w = z - x doesn't give you y back. And I don't understand your comment about the round AFTER the subtraction. The subtraction is exact, so the "rounding step" doesn't alter the value of the result. Thanks for the continued help with the page. --- Jake 21:16, 16 October 2006 (UTC)
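Jake's point can be reproduced directly with Python's decimal module set to seven digits (the particular values below are my own illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 7            # seven significant digits, as in the examples
x = Decimal("3.141593")
y = Decimal("0.000007777")
z = x + y                        # true sum 3.141600777 rounds to 3.141601
w = z - x                        # exact subtraction: 0.000008, which is not y
```

The subtraction z - x is exact (no rounding step), yet w differs from y: the damage was done by the earlier round when z was formed.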
Jake, Earlier I had appended an extension
  e=1;  s=3.141600??????...
 -e=1;  s=3.141593??????...
 --------------------------
  e=1;  s=0.000007??????...
  e=-5; s=7.??????...
which someone has whacked. Your text goes
  e=5; s=1.235585
 -e=5; s=1.234567
 ----------------
  e=5; s=0.001018   (true difference)
  e=2; s=1.018000   (after rounding/normalization)
In this, clearly the subtraction has been performed, then there is the rounding/normalisation. Your edit comment "It is not the subtraction which is the problem, it is the earlier round" is unintelligible unless "earlier" is replaced by "later" (thus my remark), though I was wondering if you were meaning to put blame on the rounding that went into the formation of the input numbers (the 1.235585 and 1.234567) which if they had been held with more accuracy would mean that the cancellation would be less damaging except of course that there are only seven digits allowed.
In these examples, there is no rounding after the subtraction only the shifting due to normalisation. Thus I erred in saying that the round was after the subtraction since there is no round. After an operation there is the renormalisation step, which in general may involve a round first, thus the order of my remark. With subtraction, only if there was a shift for alignment would there be rounding of the result, and if there is shifting the two values can't be close enough to cause cancellation! A further example would have operands that required alignment for the subtraction (as in the earlier example demonstrating subtraction) and then rounding could result as well as cancellation. (Thimks) But no.
 11.23457 - 1.234561    (both seven digits, and not nearly equal)

  11.234570   (an eighth digit appended for alignment)
 - 1.234561
 ----------
  10.000009   (exact subtraction)
  10.00001    (rounded to seven digits)
So, cancellation doesn't involve rounding. Loss of Significance (which does involve rounding, but that's not the problem), and Cancellation are thus two separate phenomena. Clearly, cancellation occurs when the high-order digits match (which requires the same exponent value) while rounding works away at the low-end digit.
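Both phenomena can be checked mechanically in seven-digit decimal arithmetic (a sketch using Python's decimal module, with the operands of the examples above and 11.23457 as the larger operand in the alignment case):

```python
from decimal import Decimal, getcontext

getcontext().prec = 7   # seven significant digits

# Cancellation: the result is exact, no rounding occurs,
# but most of the significant digits annihilate.
d1 = Decimal("1.235585") - Decimal("1.234567")

# Alignment case: the eight-digit true difference 10.000009
# must be rounded back to seven digits.
d2 = Decimal("11.23457") - Decimal("1.234561")
```

The first difference comes out exactly as 0.001018 with no rounding; the second is rounded to 10.00001, illustrating that cancellation and rounding are separate phenomena.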
The bit about z:=x + y; w:=z - x; relates more to the Kahan Summation algorithm, which I have also messed with though to less disfavour: perhaps WA hasn't noticed. I seem to have upset him.
Hi. I noticed you were the original author of the Interprocedural optimization page. I didn't see any external references or sources to other websites. Perhaps you could put a note on the talk page if it is your original work. Thanks - Hyad 23:36, 16 November 2006 (UTC)
As I recall, there was some reference to interprocedural optimisation in some article I had come to, that led to nothing - a red link, I think. So (feeling co-operative) to fill a hole I typed a simple short essay on the spot. The allusion is to the Halting Problem (Entscheidungsproblem) which I didn't elaborate upon but otherwise the text is mine. A much larger article could be written on details, especially along the notions of "invariants" or other properties that a procedure might require or not, create or not, and advantage be taken or not in code surrounding the procedure invocation, but I stuck with brevity and the basics. NickyMcLean 01:51, 18 November 2006 (UTC)
Hello. Please note that TeX is sophisticated; you don't need to write
(as you did at trial division) if you mean
Michael Hardy 15:32, 7 July 2007 (UTC)
Well... if it was so sophisticated, might it not recognise that <= could be rendered as a fancy symbol? And a further step might be that the "preview" could offer hints to those such as I who (in the usual way) haven't read the manual. Also, I now notice that in the "insert" menu below shown during an edit, there is a ≤ symbol. My editing is usually an escape from work-related stuff, that doesn't often involve TeX. But thanks, anyway. NickyMcLean 22:31, 9 July 2007 (UTC)
Yes, few people these days have had contact with the old machines and therefore have not had to punch cards, interact with control-panel lights and switches, type on a console typewriter to patch a program, learn the difference between 9-edge and 12-edge, etc. However, detailed procedures along with the full rationale for every step get tedious to read (much worse than actually doing it, which quickly became an automatic motor skill after you had done it a few times). Also, one could go on endlessly with operating procedures. There were also multiple different ways of doing a procedure (e.g., there are actually 3 different "clear core" instructions for the Model I: the TFM version you listed, a TF version, and a TR version - plus variants of each of those) that were used by different sites. A short summary is probably more likely to be read and understood at a basic level than long detailed procedures with expanded "commentary". If someone wants more detail they can go to the online references given.
I am thinking of putting some limited implementation specific operating procedures in the IBM 1620 Model I and IBM 1620 Model II articles. However let me have a few days or weeks to think through an organization for the material to avoid getting it all cluttered. I also don't want to just copy procedures from the manuals that are already online and can be looked at if a person is interested. -- RTC ( talk) 23:31, 25 February 2008 (UTC)
Please see Talk:Extended precision#Hyperprecision. -- Tcncv ( talk) 02:35, 19 May 2008 (UTC)
Hello NickyMcLean. Thank you for your improvements on the Tide article. I noticed that some of your edits concern the national varieties in spelling, e.g. analyse and analyze. As I understand from the Manual of Style, see WP:ENGVAR, the intent is to retain the existing variety. Best regards, Crowsnest ( talk) 21:12, 28 May 2008 (UTC)
I only recently visited (i.e. stumbled across) this article and I am astounded by the apparent complexity of such a simple (and, in my day, ubiquitous) technique that seems almost an afterthought in today's programming world.
It is almost suggested that this technique is only a little better than sequential (i.e. mindless) scanning and sometimes worse than generating a hash table.
There are even better techniques, such as indexed branch tables (using the searched-for value as the index in the first place - a perfect hash technique, effectively requiring no hash-table building), that are not even mentioned! Locality of reference is also a vastly overstated issue.
What is even more astounding is that "professional" programmers can have a serious bug outstanding (and copied) for 15 years in what is a truly ridiculously simple procedure!
The over-exuberant use of mathematical formulae obscures the utter simplicity of this method, which is almost entirely encapsulated in the first paragraph and needs little more explanation. ken ( talk) 21:30, 1 July 2008 (UTC)
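The long-lived bug ken alludes to is, presumably, the midpoint overflow in mid = (low + high)/2 on fixed-width integers; the standard repair is mid = low + (high - low)/2. A minimal binary chop in that form (my own sketch; Python integers cannot overflow, so here the form matters only as illustration):

```python
def chop(a, key):
    """Return the index of key in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = low + (high - low) // 2   # overflow-safe midpoint form
        if a[mid] == key:
            return mid
        if a[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```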
Hello Nikky, There seems to be much too much emphasis on particular implementations of the 'technique' and catering for overflows etc. These should be part of the specific programming language/ hardware restrictions rather than covered at length in an article of this nature. As for detecting special cases caused by such restrictions, of course they should be part and parcel of the normal testing procedure in the anticipated environment. I recognize that many things are not 'anticipated' by programmers but this only goes to demonstrate their lack of adequate training (or ability).
You mentioned two languages that I am 100% familiar with (Assembler and PL/1). I worked mostly with IBM 360/370 architecture and, to illustrate a point about indexed lookup (if it can be called that!), please see the "Branch Tables" section in the WIKIbooks article on 360 branch instructions. For short keys (or the first part of longer keys) of one to two bytes, an extremely effective technique is to use the first character itself (or first two characters) as the actual 'index' to a table of further index values (i.e. for a one-byte key, a 256-byte table - giving extremely good locality of reference if the table is close - or 32K/64K, at worst, for a two-byte key). [1]

I used this technique (multiple times) in almost every single program I ever wrote, because most, if not all, of my programs from the early days after I discovered the technique were table driven and in effect 'customized 4GLs' specific to a set purpose. My programs consisted of tables pointing to other tables that controlled and performed most of the required logic. The table-processing code itself was fairly standard and so, once written, could be re-used over and over again without change (or bugs) until a special case occurred that was not covered in the existing tables. Any 'bugs' would be in the table specifications, not the programming. Parsing was a particularly fast and efficient process, as it was usually simply a case of changing a few values in a table copied from a similar table used many times earlier and tested.

I used the technique in my 370 instruction simulator, which provided 100% software simulation of any user application program (written in any language) and included buffer overflow detection and single-instruction stepping (animation). It had to be fast, and the reason it was is that it used one- and two-byte indexing to the point of obsession. Its simulation 'engine' had zero bugs in 20+ years of use in multiple sites around the world executing time-critical on-line transactions for customers.
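The first-byte trick translates to any language with arrays; a toy Python analogue (the names and table contents are mine) of the 256-entry index:

```python
# Build a 256-entry table indexed directly by the first byte of the key:
# a "perfect hash" that needs no hashing and no search at all.
table = [0] * 256                 # slot value 0 means "no entry"
for slot, keyword in enumerate(("ADD", "SUB", "MPY"), start=1):
    table[ord(keyword[0])] = slot

def lookup(key: str) -> int:
    return table[ord(key[0])]     # one index operation, no comparisons
```

A real implementation would chain to a second-level table to disambiguate keys sharing a first byte; this sketch assumes distinct first characters.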
Similar techniques were used in the "works records system" I wrote (a 1974 spreadsheet for ICI) - it had zero bugs in 21 years of continuous use: see [2]. Bear in mind that for these and similar techniques there were, in general, no 'stacks', recursive calls to sub-routines, 'memory leaks', 'stack overflows' or similar. Many of today's languages use more instructions 'getting to the first useful instruction' than were used in the entire validation/lookup scenario (frequently fewer than 5 machine instructions in total). As far as I know, C, for instance, does not have an equivalent technique that does not (ultimately) demand a sub-routine call in most cases. (Please advise me if I am wrong on this and provide an example of the code generated and its instruction path-length!)
Naturally my binary chop routines were generic and built around tables too. Once one table was tested thoroughly it could be re-used and invoked "ad infinitum", with the limitations of the hardware and field sizes etc. already known.
Cheers ken ( talk) 13:51, 4 July 2008 (UTC)
I have been trying to get my hands on a section of compiled 'C' generated code for actual examples of CASE/SWITCH statements, but I haven't found anyone who can comply after months of asking around. It's no good asking me to compile my own, because of the complexity of setting up my PC to configure/download something I could actually recognize - that didn't also come with about 500 optional 'nuances' (of library x and DLL y thingies), this implementation v. that implementation, etc. etc. - all much too 'three-letter acronymish' for me to want to fathom. If I want to use a word processor or a spreadsheet I download it and away I go - but to play with a language like C? I need a combined first degree in acronyms, knowledge of HTML, XSLT, W3C (nth revision), WinZIP, LooseZAP, Tar, feathers, crunch, bite, pango, glib, Gnome, GDA, python, AJAX, DAZ and "baby bio" - you name it - just to get the source of the compiler for 'C'; then I (think) I need to know how to install the 'C' compiler and compile it, before I can build my program to compile - or at least that's how it appears to me!
By the way:-
1) what is BALROG?
2) if you look at the branch table example I quoted, you will also see a 2nd example using two byte offsets achieving exactly the same purpose (but requiring one extra machine instruction).
3) self-modifying code - I am sure I recently added a section to a Wikipedia article about this very subject, but it has mysteriously disappeared along with its history of me putting it there! - a sort of self-modification or 'cosmic censorship', I think. Cheers ken ( talk) 05:24, 8 July 2008 (UTC)
"If god had intended us to use modified code, he would have allowed the genome to evolve and permit Epigenetics" quote 9 july 2008 - you heard it first on Godpedia! The 'Works records System' (first interactive spreadsheet) had the bare bones of Fortran at its heart. I took Fortran and manually 'cut out all the crap', creating new code segments of a clean, re-entrant and "concatenate-able" nature which executed significantly faster than the original on extremely 'late binded' (bound?) data. It is entirely true to say that the resultant optimized code could not have been produced faster by a very competent assembler programmer - because I was that assembler programmer - and speed was my middle name! ken ( talk) 20:10, 9 July 2008 (UTC) Afterthought. You might enjoy this link [3] I did and agree with most of it! ken ( talk) 05:30, 10 July 2008 (UTC)
Minutes taken                                    Checking   No Checking
IBM1130 (using assembler)                          900
Pentium 200MHz, L1 I&D 16k, Wunduhs98               24.96      18.83
266MHz, Wunduhs95                                   32.7       20.4
Pentium 4 3200MHz, L1 12k, L2 1024k, WunduhsXP      11          6.1
Hello - could you please provide references for your addition? Thanks and happy editing. Ingolfson ( talk) 06:42, 3 July 2008 (UTC)
I tried to remove the self-reference through the redirection — the Binary search algorithm article contained links to sections of the Binary search article, while the latter is a #redir to the former. However, your problems revealed to me an inconsistency in those links: some of them were capitalized, while the actual section titles are all lower-case. In some magic way that makes a difference when addressing a part of the same article, but not for cross-page links. How does it work now? I changed all the section links to lower case. If it is still bad, revert my last changes. -- CiaPan ( talk) 05:53, 18 September 2008 (UTC)
Hi Nicky!
Could you export your tide plots (and perhaps other plots you might have made) in SVG format and use them instead of the PNG versions in the articles? This is generally the preferred format for plots on Wikipedia. I think there's a free SVG exporter for Matlab here: http://www.mathworks.com/matlabcentral/fileexchange/7401 Morn ( talk) 01:55, 11 November 2008 (UTC)
In orthogonal analysis, I changed the first form below to the second, which is standard:
TeX is sophisticated; there's no need for such a crude usage as the first one.
Also, one should not write ''a2''. The correct form is ''a''2. Digits, parentheses, etc., should not be included in these sorts of italics; see WP:MOSMATH. This is consistent with TeX style. Michael Hardy ( talk) 15:53, 25 April 2010 (UTC)
The article Orthogonal analysis has been proposed for deletion because of the following concern:
While all contributions to Wikipedia are appreciated, content or articles may be deleted for any of several reasons.
You may prevent the proposed deletion by removing the {{dated prod}} notice, but please explain why in your edit summary or on the article's talk page. Please consider improving the article to address the issues raised. Removing {{dated prod}} will stop the proposed deletion process, but other deletion processes exist. The speedy deletion process can result in deletion without discussion, and articles for deletion allows discussion to reach consensus for deletion. RDBury ( talk) 18:07, 29 April 2010 (UTC)
In the example here and mentioned in the current talk page? Thanks. -- Paddy ( talk) 03:17, 7 July 2010 (UTC)
Thanks. -- Paddy ( talk) 06:14, 8 July 2010 (UTC)
Thanks for uploading File:Geothermal.Electricity.NZ.Poihipi.png. You don't seem to have indicated the license status of the image. Wikipedia uses a set of image copyright tags to indicate this information; to add a tag to the image, select the appropriate tag from this list, click on this link, then click "Edit this page" and add the tag to the image's description. If there doesn't seem to be a suitable tag, the image is probably not appropriate for use on Wikipedia.
For help in choosing the correct tag, or for any other questions, leave a message on Wikipedia:Media copyright questions. Thank you for your cooperation. -- ImageTaggingBot ( talk) 23:05, 23 August 2010 (UTC)
The centre of mass of the Earth-Moon system is at about 3/4 of the radius of the Earth. So if it played a role in determining the tidal forcing, the effect on the back side (at 7/8 distance from it) would have to be much different from the one on the front (at 1/4 distance). In reality the only thing that counts is the gradient of the gravity field. This is only slightly different on front and back.
You are right in commenting that the horizontal component is even more important. There are two circles on the surface where the tidal force is parallel to the surface, leading to direct flow. It is difficult to work that in without interrupting the flow for the reader. − Woodstone ( talk) 14:59, 18 January 2011 (UTC)
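The gradient argument can be made explicit with a standard first-order expansion (my own sketch, not Woodstone's wording). With the Moon of mass M at distance d, the lunar acceleration at the Earth's centre is GM/d²; on the near and far surfaces (Earth radius r) it differs from that by

```latex
\frac{GM}{(d \mp r)^2} - \frac{GM}{d^2} \approx \pm \frac{2GMr}{d^3}
```

the same magnitude on both sides to first order in r/d, which is why the forcing is nearly symmetric front and back and the barycentre's position is irrelevant.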
Nice post-edit. I am uncomfortable with the large text that you moved, as the motivation for LIBF should come at the start and not merely be explained at the end. "Leapfrogging" is clever but I wonder if we really want to cover every technique to save words of code. Finally, the exclamation point doesn't look encyclopedic. Cheers. Spike-from-NH ( talk) 12:43, 2 March 2012 (UTC)
Yup, I remember an assignment from Assembler class, to code a trivial routine in the absolute minimum number of words, the solution being to put init code inside unused in-line parameters of a LIBF.
I've already post-edited such of your additions as I took exception to above, and this morning rewrote the example. If I added any unwarranted passives, please point them out or revert them, as I'm against them too. However, there is a middle ground between "mumbling...monotone" and stand-up comedy. Spike-from-NH ( talk) 21:54, 4 March 2012 (UTC)
In the Digital PDP-10 stables, just after you learned that JUMPA (Jump Always) was what you should code rather than JUMP (which, with no suffix, jumped Never), you learned that the best instruction was usually JRST (Jump and Restore), an instruction that did a lot of miscellaneous things and, incidentally, jumped; it was faster because it had no provision for indirection and indexing, which were time-consuming even when null. These days, on the rare occasions I code in Pascal, I drop down to assembler only in the half dozen routines repeated most often; but once there, selecting the very fastest assembler code is still an obsession. Spike-from-NH ( talk) 22:27, 21 March 2012 (UTC)
With regrets, I've reverted some of your most recent edit, including all of the details of the CARD0 routine. I think this section should be an overview of assembler programming, touching on the issues of working with device drivers but omitting details such as which bits are used for what and the sequence in which CARD0 does things. Also, as mentioned in the Change History, if a program is going to check for // before converting, then it checks the Hollerith code for /, double-slash not having a Hollerith code; I also restored "when simplicity is desired" (despite the passive!) as the general rule is that modern programming would not use asynchronous I/O. Spike-from-NH ( talk) 11:30, 23 March 2012 (UTC)
PS--But there is a factoid that belongs in the section on Asynchrony and performance, if it is true and if anyone can come up with a citation for it: Our computer center came to believe that if you simply coded a self-evident Fortran DO loop to read cards in the usual way, it would switch the card-reader motor on and off on every pass, so as to severely reduce its Mean Time Between Failures. Spike-from-NH ( talk) 11:34, 23 March 2012 (UTC)
I concede your point on "when simplicity is required" and have removed the entire sentence that implies there is anything modern about unbuffered I/O. Also rearranged the previous paragraph, though it still has an excessive mix of strategy versus calling convention. Am disappointed you could not confirm my memory about the Fortran use of the card reader. It was indeed the rhythmic stop-and-start of the card reader motor that our guys and IBM engineers suspected was leading to so many service calls. Spike-from-NH ( talk) 11:58, 24 March 2012 (UTC)
I don't accuse anyone of treachery when there is a simpler answer, I know how big organizations work, and IBM was always, famously, the biggest (outside the military). There would have been little ability of Card Reader Engineering to communicate to Fortran Development about the fact that the compiler, in its most typical use, would overtax the card-reader motor. (At DEC, managers hoped for interdisciplinary meetings at The Pub across the street for cross-pollination to detect problems outside the chain of command.)
If I confused you, it was not with a passive (in the technical sense), but you have just stuck one into the article, which I massaged.
We are both treading close to the line of being hit by the guy who slaps templates at the start of articles condemning "original research." We are not supposed to dump our own memories onto these pages but to document things the reader can verify, and go beyond, by reading the citations. The last time this happened to me, I decided that I did want to do engaging writing and not just Google searches, and went to Uncyclopedia for a couple of years to write humor. Spike-from-NH ( talk) 22:09, 25 March 2012 (UTC)
You have indeed devised an opcode that complements the low-order byte of ACC, though it takes an extra memory reference. The incantation I remember is LDD *-1 \ STD *, which even made the keyboard abort impossible and required a reboot.
It doesn't seem a pity to me that we are losing expertise at this (and the kindred expertise at fitting subroutines into 128-word pages on the DEC PDP-8), as the brainpower is now being applied to more useful things. My father once lamented the waning of American manufacturing and I asked him if he would prefer good American typewriters over his quadruple bypass.
Writing a paper and then citing yourself on Wikipedia is a solution, and one I think is used more often than some people let on.
I recently recoded my venerable BASIC interpreter in Pascal. The result executes statements faster than the 1130 could move single words, would never have trouble getting an algorithm to fit in 64K, and runs fine on a used laptop that cost $50. But it is not the best approach to any problem one would have nowadays. Spike-from-NH ( talk) 21:53, 26 March 2012 (UTC)
Minutes taken                                    Checking   No checking of array bounds
IBM1130 (using assembler)                          900
Pentium 200MHz, L1 I&D 16k, Wunduhs98               24.96      18.83
266MHz, Wunduhs95                                   32.7       20.4
Pentium 4 3200MHz, L1 12k, L2 1024k, WunduhsXP      11          6.1
Your red carpet beats mine, as it doesn't have the odd-address restriction. The guys who thought up my incantation wanted the red carpet to still be functional after it rolled out, but there's no need for that. Your incantation for the PDP-11 is the classic one.
Regarding amount of a corpus that one knows (or thinks one knows: No one can know all the applications of knowledge nor its effects in each application), your statements remind me of Donald Rumsfeld's widely ridiculed conundrum that "often you don't know what you don't know." Being troubled by the loss of expertise is part of what turns men unwilling to throw away anything and accumulate huge stores of tools and connectors. I still have lots of EPROMs, just in case. But I did part with my EPROM programmer.
A speed improvement of 45 seems low, but it's an improvement in only one dimension. Another dimension is the improvements that allowed us to own our own "mainframes" and operate them in dusty rooms without air conditioning. Spike-from-NH ( talk) 22:50, 28 March 2012 (UTC)
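For readers who never met a "red carpet": it is a two-word instruction pair that stores a copy of itself just ahead of itself, so that execution falls into the fresh copy and the carpet rolls on, wiping memory as it goes. A toy simulation (entirely my own construction, not actual 1130 or PDP-11 code):

```python
# Pretend memory: the pair at addresses 0-1 copies itself to the next
# two words, execution falls through into the copy just made, and the
# carpet unrolls until it runs off the end of memory.
mem = ["JUNK"] * 16
mem[0], mem[1] = "LDD", "STD"    # the self-propagating pair
pc = 0
while pc + 3 < len(mem):
    mem[pc + 2], mem[pc + 3] = mem[pc], mem[pc + 1]   # copy the pair forward
    pc += 2                                           # step into the copy
```

Afterwards every word of "memory" holds a copy of the pair.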
My post-edit of you was because, regardless of whether what is at /0001 is XR1 itself or a copy, it is the existence of a memory address for XR1 that enables the register-to-register moves as described.
The table documented /0000, although it is dictated by convention not by hardware, because we refer to it in two other places. But the information you appended to the table strikes me as open-ended; we ought not set out to describe all the variables in the Skeleton Supervisor--though I'm contemplating gathering all the material about long Fortran programs together and mentioning the INSKEL common area. Spike-from-NH ( talk) 02:22, 4 April 2012 (UTC)
The B instruction is not really LDX L0 (6400xxxx) but seems to be a synonym for BSC (4C00xxxx) with no modifier bits set. Separately, I can't discern the difference between BSC and BOSC. Spike-from-NH ( talk) 15:23, 8 April 2012 (UTC)
I see no return-from-subroutine instruction in the 1130 Reference Card, and claim the writer of the interrupt service routine must have done it manually. But that's problematic too, as LDS is not the reverse of STS; the Reference Card defines only four opcodes (concerning carry and overflow), and the operand is immediate. Spike-from-NH ( talk) 23:07, 9 April 2012 (UTC)
I don't know where I got this. Probably from some other processor, but I can't think which. Thank you for the correction (to IAR). Now, in my new section on "Large Fortran programs," is "phase" the correct term for a program that chains to another program to continue a single task? Spike-from-NH ( talk) 00:24, 8 April 2012 (UTC)
Could have been your mistake, I don't know. I had assumed your Change Summary was accusing me, so I gracefully copped to it. "Phase" is indeed the term for one of the partial processes on the way to a compilation, but I don't know if it's the right term for a program that LINKs to another to complete the job. Spike-from-NH ( talk) 23:07, 9 April 2012 (UTC)
I think you are wrong to have deleted the alternative: "until an interrupt reset the machine". The average interrupt would not reset the machine, nor would it prevent the infinite loop from resuming after return from interrupt. But I recall that it was usually possible after such a stuck job to use the interrupt key on the keyboard to cancel the job and seek through card input for the next job card. Spike-from-NH ( talk) 20:50, 15 April 2012 (UTC)
Have now done so. Indeed, it was not "an interrupt [that] reset the machine" because most interrupts wouldn't; it was the Int Req key, and thank you for remembering its legend. Incidentally, the thing about the "red carpet" we discussed that really conferred bragging rights is that it wiped out the service routine for Int Req and forced the operator instead to remove the cards from the reader, manually identify the offending job, and reboot. Spike-from-NH ( talk) 00:43, 17 April 2012 (UTC)
As with transfer vectors, it looks as though I am about to learn something new about the 1130 that they tried but failed to teach me in 1973. But I want to remove a great deal of technical detail from your contribution. Most notably, you walk us through how a Fortran subroutine could be coded--except that, apparently, it's not. Presumably, it's coded as a bunch of LD *-*
and STO *-*
ready for patching by SUBIN--and never an ADD instruction referencing a parameter, as your hypothetical code suggests. Given the state of the 1130 in current computing, no one needs such a detailed walk-through of the operation of SUBIN, nor of an error message you say almost never occurs. The only point of looking so deep under the covers is as another example of self-modifying code. We need a clearer summary of what the goal was: I infer that it was the replacement of hypothetical, triple-memory-access indexed instructions by direct memory accesses. After my read, it is astonishing that, for typical subprograms, this autopatching ever resulted in savings of time or memory. It isn't clear where SUBIN gets called; I infer that the call is the first instruction of every Fortran subprogram with parameters. Finally, your text implies that IOR could not be written at all. Surely the answer is that IOR was written in assembler and thus was immune from the gentle mercies of SUBIN?
Spike-from-NH ( talk)
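The patching being described can be mimicked in a toy form: the compiler emits LD *-* and STO *-* skeletons with placeholder operands, and SUBIN rewrites them to the caller's actual argument addresses, so later executions use direct rather than indirect accesses. A hypothetical Python sketch of that idea (opcodes and addresses invented for illustration; this is not 1130 code):

```python
# toy instruction stream: each instruction is (opcode, operand_address)
PLACEHOLDER = None   # stands for the "*-*" operand awaiting a patch
LOCAL_SUM = 500      # hypothetical address of a local variable

def assemble_subprogram():
    # skeleton as the compiler might emit it: parameters are touched only
    # through patchable LD/STO instructions; arithmetic references locals,
    # never a parameter directly
    return [
        ("LD",  PLACEHOLDER),   # load first parameter (address patched in)
        ("ADD", LOCAL_SUM),     # arithmetic references a local variable
        ("STO", PLACEHOLDER),   # store to second parameter (patched in)
    ]

def subin_patch(code, arg_addresses):
    # rewrite each placeholder operand with the caller's actual argument
    # address, converting would-be indirect accesses into direct ones
    addresses = iter(arg_addresses)
    return [(op, next(addresses) if operand is PLACEHOLDER else operand)
            for op, operand in code]
```

After `subin_patch(assemble_subprogram(), [100, 101])`, the LD and STO operands hold the argument addresses 100 and 101 directly.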
I have tried to apply a smoother mount and dismount, and to use two subsections rather than one. I still don't think we need a complete walk-through of a routine, given that we had just slogged through the typical calling protocol in the previous section. Spike-from-NH ( talk)
And thank you for your post-edits. I took issue with only one thing: No one should need more than the most cursory mention in the prose about what intermediate code the compiler produced before SUBIN patched it. Spike-from-NH ( talk) 21:53, 6 November 2012 (UTC)
Sorry to walk on you (as they used to say in Citizens Band Radio). We are converging. But at the end of this section, we've done so much writing about heuristics and patching that I felt it necessary to say that the copying to a local variable is done by the subprogram's (human) author and not auto-magically. One other thing: I wrote "four in the case that the index register were not implemented as a memory location"--Is this the same thing as writing "four on the IBM 1800"? Spike-from-NH ( talk) 22:36, 6 November 2012 (UTC)
At the risk of walking on you again: My other change is because it is outlandish to have Fortran code appear as the comment of a line of assembler, though your meaning was clear. Spike-from-NH ( talk) 22:41, 6 November 2012 (UTC)
Oh, I don't like this at all. The sentence gave advice to the programmer on a technique to coexist optimally with SUBIN, probably of benefit to no one ever again, even if a working 1130 is found, as that guy on the talk page asks. Your additional text gives the programmer the additional advice not to hurt himself doing so. I would prefer that you strike the entire sentence. It is all true, but such a detailed how-to on actually undertaking an 1130 programming project is excessive (and further temptation to the gadflies who patrol for Original Research). Spike-from-NH ( talk) 20:44, 7 November 2012 (UTC)
Fine; I only added a comma. But declaring advice to be "the standard advice" is the type of language that impels some Wikipedians to demand a citation that I am sure you cannot provide.
Now, I have been perusing your other recent changes and, again, your insistence on specifying exactly how the index registers were implemented (register versus core) is too much detail for current readers. I understand you spent much brainwork figuring out how to save cycles in routines, a skill I too wish were as important as it used to be (though we both benefit enormously from the fact that it is not). But memorializing this, though it fits with my concept of Wikipedia as a global repository, does not fit with the dominant view of Wikipedia as an encyclopedia. More relevant to me, our article is also all there is on the instruction set of the IBM 1800, where the index registers were registers. Spike-from-NH ( talk) 22:55, 7 November 2012 (UTC)
Very well--though I think the typical use of this article is to read about the 1130, not to learn how to program it (with such details as would let the reader write optimal code). Now, back at the last sentence of your SUBIN submission, the problem recurs in the newest wording that we can't tell who is copying that parameter into the local variable. I reverted it to approximately what we had on Tuesday. (This also reinserted the clause, "to increase speed and reduce memory requirements", to which I'm not especially attached.) Spike-from-NH ( talk) 23:23, 8 November 2012 (UTC)
Then I will do so. Spike-from-NH ( talk) 01:05, 9 November 2012 (UTC)
For our next trick: I see that the article is seriously sloppy about the use of the term "subprogram" (which, according to Fortran, comprises "subroutine" and "function"). I'd like to add this convention in passing and correct each use of "subprogram" and "subroutine" to be the correct one. Spike-from-NH ( talk) 01:05, 9 November 2012 (UTC)
This is now done--but a second pair of eyes is always welcome. Spike-from-NH ( talk) 23:58, 9 November 2012 (UTC)
Well-spotted! So, as we now say at the start of Section 3, "subprogram" comprises both "subroutine" and "function." "The actual word in the source file is 'subroutine'" only when that source file defines an instance of the subroutine type of subprogram. On other definitions, the actual word is "function". The US spelling seems appropriate, as US grammar is followed elsewhere in this article about a US product. Spike-from-NH ( talk) 23:43, 11 November 2012 (UTC)
If you don't have sources for a topic, then you should not be adding material to an article. Under WP:BRD, once you've been reverted, you shouldn't try to step the material back in, but rather you should bring it up on the talk page. Stepping the material back in runs the risk of edit warring and WP:3rr violations. My sense is you are puzzling exotic argument passing out for yourself. That's a good thing to do, but that does not mean the answer that you develop should be put in the article. Sadly, Jensen's device does not mention thunk (functional programming). Glrx ( talk) 23:52, 26 June 2012 (UTC)
swap(i,A[i]), but the exposition was not clear. They showed naive code failing by using the Copy Rule, but there was no claim that the problem could be fixed by using two temporary variables (essentially a parallel assignment). Knuth in CACM 10 claimed increment(A[i]) could not be written, suggesting that the Algol 60 definition has a different view of assignments to expressions -- and that there is a more fundamental problem with swap(i,A[i]).
Glrx ( talk) 20:09, 2 July 2012 (UTC)
Ah, university. I recall friends prattling on about pass-by-value and Jensen's device but never being clear on implementations. As my studies involved Physics and Mathematics and were untroubled by newfangled courses in computer science I did not have a lecturer to berate in a formal context where a proper response could be demanded. Pass-by-name was described as being as if the text of the parameter were inserted in the subprogramme wherever the formal parameter appeared. I saw immediate difficulties with scope and context (the expression may involve x, which was one thing in the caller's context and another quite different thing in the function: the equivalent of macro expansion would lead to messes) but never persuaded the proponents to demonstrate a clarifying understanding. As I was not taking the course and didn't use Algol, if they couldn't see the need for clarity I wasn't going to argue further. I was particularly annoyed by talk of the evaluation of expressions whereby parts of equal priority might be evaluated in any order; for me, order of evaluation was often important and fully defined by the tiebreaker rule of left-to-right (and also, I wanted what is now called short-circuit evaluation of logical expressions), so I was annoyed by Algolists. I can imagine an ad-hoc implementation involving special cases that would cover the usual forms as described in textbooks, but not in general. Thus, parameter "i" may be noted as being both read and written to in the function, and on account of the latter its parameter type could be made "by reference" and all invocations of the function would have to be such that the parameter, supposedly passed by name, is really passed by reference to deliver on bidirectional usage and that expressions would be rejected as being an invalid parameter manifestation. 
However, the second parameter, also passed-by-name by default, is seen in the function to not be assigned to, and so its type remains "by name" and thus all invocations within the function are effected by leaping out to evaluate the expression in the calling statement and returning (rather like coroutines!), even if the calling statement presented only a reference such as "i" rather than say "3*x*i^2" - in other words, the expression produces some result, an especially simple notion on a stack-based system. That is, some parameters are seen as expressions, even if having exactly the same presentation as parameters that are references and not expressions. In this circumstance, an expression such as inc(i) as the first parameter would be rejected. As a second parameter, it would be accepted, but there would be disruption over the workings of the for loop whose index variable is "i" - does the loop pre-compute the number of iterations, or, does it perform the test on the value of "i" against the bound every time? But Algol offers still further opportunities: it distinguishes between a variable and a reference to (the value of) a variable, and, "if" is a part of an expression. Thus, "x:=(if i > 6 then a else b) + 6" is a valid expression, the bracketed if-part resulting in a reference to either "a" or "b" to whose value six is to be added. Oh, and expressions can contain assignments to variables as a part also. I suppose this is where there starts to be talk of a "copy rule" for clarification, but it is not quite the style of "macro" substitution.
In the context of the article, I suppose there needn't be mention of what might go wrong (as with an expression for the "i" parameter) as we don't know what might actually go wrong only that surely it won't work, yet references are unclear. NickyMcLean ( talk) 20:34, 4 July 2012 (UTC)
I'm well aware that call-by-name is not call-by-reference. I was trying to indicate the equivalence in the case of an actual parameter being a simple variable (and a simple simple variable: not an array reference such as a(i) where "i" might be fiddled) so as to enhance the contrast for the case when the actual parameter is an expression that returns a result, which in turn is different from a statement such as "increment(x)" which is not a legitimate parameter as it does not yield a value. Potentially, readers are familiar with call-by-value and call-by-reference and so could take advantage of the comparison. And yes, I do not have a text to hand where such a discussion exists in an approved source. NickyMcLean ( talk) 20:34, 4 July 2012 (UTC)
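For what it's worth, the call-by-name machinery under dispute can be sketched with closures as thunks: a by-name parameter becomes a pair of get/set routines that leap out to the caller's context, which is enough to make Jensen's device work and to show why a naive swap(i, A[i]) goes wrong. A Python illustration (the helper names and the one-element-list "cells" are my own devices, not from any source):

```python
def jensen_sum(set_i, lo, hi, term):
    # Jensen's device: i is passed "by name" (set_i writes the caller's
    # variable) and term re-evaluates the caller's expression each time round
    total = 0
    for v in range(lo, hi + 1):
        set_i(v)
        total += term()
    return total

i = [0]                      # a cell standing in for the caller's variable i
A = [1, 2, 3, 4]
s = jensen_sum(lambda v: i.__setitem__(0, v), 0, 3, lambda: A[i[0]])
# s is the sum of A[i] for i = 0..3

def swap(a_get, a_set, b_get, b_set):
    # naive call-by-name swap: each parameter is a get/set thunk pair
    t = a_get()
    a_set(b_get())           # this changes i ...
    b_set(t)                 # ... so A[i] now denotes a DIFFERENT element

j = [0]
B = [2, 9, 9]
# swap(j, B[j]) by name: after the first assignment j has changed,
# so the second assignment clobbers B[2] instead of restoring B[0]
swap(lambda: j[0], lambda v: j.__setitem__(0, v),
     lambda: B[j[0]], lambda v: B.__setitem__(j[0], v))
```

After the call, j holds 2 as intended, but B[0] is untouched and B[2] has been overwritten: the swap has failed exactly as the Copy Rule analysis predicts.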
Thanks for your note. I was referring to 'minimization' as in 'optimization'. The specific problem I'm working on is from 5- up to 100-dimensional fitting of experimental data using a fundamental kinetic model - an error minimization problem. I'm using the BFGS method, which needs a numerical derivative since my problem is far too complex to solve analytically. I was considering the accuracy of a four-point solution since it may accelerate convergence, but realised that since I was using forward differences, I could try central differences first to get the h^2 (better than h) error. This did not improve convergence and in fact slowed everything down by a factor of 2 (~double the number of function evaluations), so accuracy of the derivatives is clearly not the limiting factor in convergence. Clearly in this case a 4-point derivative estimate would simply be 4x slower than forward differences, but it would be great to see the error reduction in the graph.
I know all about the dangers of floating point; my solutions start to degrade in accuracy at h < 1e-7. If only they had thought ahead about the needs of technical / scientific computing when defining doubles. Intel has the built-in 80-bit format, but it is maddeningly difficult to work out when it will be used or not - at least in a C# environment. Changing one variable from function-local to class-local can completely change the accuracy & result of the optimization as it drops from 80- to 64-bit representation. Doug ( talk) 19:14, 26 November 2012 (UTC)
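The h-versus-h^2 trade-off mentioned above is easy to see on a function with a known derivative; forward differencing needs one extra evaluation per component, central differencing two. A small Python check (function and step size chosen purely for illustration):

```python
import math

def forward_diff(f, x, h):
    # one-sided difference: truncation error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # symmetric difference: truncation error O(h^2),
    # at the cost of an extra function evaluation per component
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-4
exact = math.exp(x)          # d/dx exp(x) = exp(x)
err_forward = abs(forward_diff(math.exp, x, h) - exact)
err_central = abs(central_diff(math.exp, x, h) - exact)
```

With h = 1e-4 the forward error is around e*h/2 ~ 1e-4, while the central error is around e*h^2/6 ~ 5e-9: several digits better per step, but, as Doug observes, that extra accuracy need not translate into faster convergence of the outer optimization.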
The article Gavin Smith (author) has been proposed for deletion because it appears to have no references. Under Wikipedia policy, this newly created biography of a living person will be deleted unless it has at least one reference to a reliable source that directly supports material in the article.
If you created the article, please don't be offended. Instead, consider improving the article. For help on inserting references, see Referencing for beginners, or ask at the help desk. Once you have provided at least one reliable source, you may remove the {{ prod blp}} tag. Please do not remove the tag unless the article is sourced. If you cannot provide such a source within seven days, the article may be deleted, but you can request that it be undeleted when you are ready to add one. Lakun.patra ( talk) 18:21, 19 March 2015 (UTC)
As I stated at the bottom of ʘx's talk page, " sorry for dragging you into a discussion that is probably not in your area of expertise either."
Given how long the discussion has been, I am willing to guess you have not read all of it. There is no need to bother; I will summarize for you.
By directing you to ʘx's talk page, I was merely trying to centralize the discussion, as the discussion on the article's talk page was less active. I was also pointing out that the concern over quantum bogosort's theoretical validity does not stem solely from the physical barriers to destroying the universe but also from the underlying concepts of an algorithm and a function, which are often defined in ways that quantum bogosort fails to meet. ʘx made the point that a specific formal system would need to be chosen in which to resolve the ill-defined aspects of quantum bogosort; neither I, nor the single source in the removed article content, nor the sci-fi magazine you suggested had successfully done so.
The discussion was unnecessarily prolonged due to ʘx correcting various misunderstandings and technical flaws of mine. I struggled to respond to the technical issues while simultaneously steering the discussion toward Wikipedia policies and guidelines about the inclusion and organization of article content.
I am not entirely sure whether we will merely revert the removal, as Graeme Bartlett seems to suggest, or whether we will write a standalone article, as ʘx suggests. It depends on what sources we manage to find; more sources ought to be added in either case. I hope this helps. -- SoledadKabocha ( talk) 08:08, 6 November 2015 (UTC)
Hello, NickyMcLean. Voting in the 2016 Arbitration Committee elections is open from Monday, 00:00, 21 November through Sunday, 23:59, 4 December to all unblocked users who have registered an account before Wednesday, 00:00, 28 October 2016 and have made at least 150 mainspace edits before Sunday, 00:00, 1 November 2016.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2016 election, please review the candidates' statements and submit your choices on the voting page. MediaWiki message delivery ( talk) 22:08, 21 November 2016 (UTC)
Hello, NickyMcLean. Voting in the 2017 Arbitration Committee elections is now open until 23.59 on Sunday, 10 December. All users who registered an account before Saturday, 28 October 2017, made at least 150 mainspace edits before Wednesday, 1 November 2017 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2017 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery ( talk) 18:42, 3 December 2017 (UTC)
Can you elaborate on the reasons for undoing my edit on the Kahan summation? I found it clarified the algorithm quite a bit. Summentier ( talk) 14:25, 6 November 2018 (UTC)
var sum = 0.0
as is allowed by C, because there is usually no need to declare ordinary variables in pseudocode, and the added facility of them being initialised in the same statement adds complexity to the reading for no gain in the exposition. I would suggest not declaring the variables, and simply initialising them at the start of the loop. The declaration of y and t within the loop is grotesque and distracting. Imagine what a non-user of such an arrangement would wonder about this. Further, is there to be inferred some hint to the compiler that y and t are temporary variables for use within the loop only, to be undeclared outside? This is an Algol possibility. Perhaps even that the compiler should produce code whereby they might be in hardware registers? Possible register usage that wrecks the workings of the method is discussed further down. But enthusiasts for "var" usages abound, though your Pythonish code eschews them.
for i:=1:N
or maybe use "to" instead of ":", so long as it had been made clear that "input" was an array of elements indexed 1 to N. And there is always the fun caused by those computer languages that insist that arrays always start with index zero. Enthusiasts of C had tried a version where the initialisation involved input(1) and the loop ran "for i:=2:N", failing to realise that this won't work well should N = 0, as well as complicating the exposition. The idea to be conveyed is that all the elements are to be added, and the details of the control for this, and especially possible tricks, are not important compared to the exposition of the algorithm.
Hello, NickyMcLean. Voting in the 2018 Arbitration Committee elections is now open until 23.59 on Sunday, 3 December. All users who registered an account before Sunday, 28 October 2018, made at least 150 mainspace edits before Thursday, 1 November 2018 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2018 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery ( talk) 18:42, 19 November 2018 (UTC)
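For reference, the compensated summation loop discussed above, written with the working variables simply initialised at the start rather than declared mid-loop, might read as follows in Python (a sketch of the standard algorithm, not the article's exact pseudocode):

```python
def kahan_sum(values):
    # compensated (Kahan) summation: c accumulates the low-order part
    # lost each time a small y is added to a large running sum s
    s = 0.0
    c = 0.0
    for x in values:
        y = x - c        # apply the compensation carried from the last step
        t = s + y        # big + small: low-order digits of y may be lost here
        c = (t - s) - y  # algebraically zero; recovers what was just lost
        s = t
    return s
```

Note that an optimizer keeping t - s in an extended-precision register can make c identically zero and so destroy the method, which is the register hazard alluded to above.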
Unique factorization does not "have to" speak of every integer larger than one. The convention for the empty product is (1) correct and (2) widespread among people with training at, say, the advanced undergraduate level, but the potential audience for prime number includes people whose mathematics education ended in primary or secondary school (or even at the college calculus level) who were never introduced to it, and it's potentially helpful to those readers to introduce 1 as a special case. -- JBL ( talk) 10:35, 17 May 2019 (UTC)
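The convention can be stated concretely: the factorization of 1 is the empty product, whose value is 1, so unique factorization covers 1 without a special case. A Python illustration (the trial-division helper is my own, for demonstration only):

```python
import math

def prime_factors(n):
    # trial division: returns the multiset of prime factors in ascending order
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# n = 1 yields no factors at all; math.prod of the empty list is 1,
# which is exactly the empty-product convention at issue
```

So prime_factors(1) is the empty list, and math.prod(prime_factors(n)) recovers n for every n >= 1.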
Hello! Voting in the 2023 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 11 December 2023. All eligible users are allowed to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2023 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery ( talk) 00:23, 28 November 2023 (UTC)
It looks as though I didn't break anything after all. I got a false alarm. William Ackerman 00:16, 31 August 2006 (UTC)
No worries! I got into a bit of a tangle with the browser (Firefox), realising that I had forgotten a typo-level change (naturally, this is observed as one activates "post", not after activating "preview"), and used the back arrow. I had found via unturnut uxplurur a few days earlier that the back arrow from a preview lost the entire text being edited, so I didn't really think the back arrow to add a further twiddle would work, but it seemed worth a try for a quick fix. But no... On restarting properly, the omitted twiddle could be added.
I agree that floating-point arithmetic is important! I recall a talk I attended in which colleagues presented a graph of probabilities (of whatever), and I asked what was the significance of the Y-axis's highest annotation being not 1 but 1.01? Err... ahem... On another occasion I thought to use a short-cut for deciding how many digits to allow for a number when annotating a graph, via Log10(MaxVal), and learnt, ho ho, that on an IBM360 descendant, Log10(10.0000000) came out as 0.99999blah which when truncated to an integer was 0, not 1. Yes, I shouldn't have done it that way, but I was spending time on the much more extensive issues of annotation and layout and thought that a short cut would reduce distraction from this. In the end, a proper integer-based computation with pow:=pow*10; type stepping was prepared. And there are all the standard difficulties in computation with limited precision as well.
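The integer-based stepping mentioned, which sidesteps the Log10(10.0) = 0.99999... truncation hazard, might look like this in Python (a sketch of the idea, not the original code):

```python
def digits_needed(maxval):
    # count decimal digits by integer stepping: pow10 runs through exact
    # integer powers of ten, so no floating-point rounding near a power
    # of ten can make the count come out one short
    n, pow10 = 1, 10
    while maxval >= pow10:
        n += 1
        pow10 *= 10
    return n
```

Unlike int(log10(x)) + 1, this gives digits_needed(10) == 2 and digits_needed(10000000) == 8 regardless of how the library rounds its logarithms.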
Nicky,
It would be fine for the example to cancel further (if that is what you are asking). I was trying to tie into an earlier example. Maybe we need to name the values. But I was trying to show that if you compute z := x + y (rounded to 7 digits) then w = z - x doesn't give you y back. And I don't understand your comment about the round AFTER the subtraction. The subtraction is exact, so the "rounding step" doesn't alter the value of the result. Thanks for the continued help with the page. --- Jake 21:16, 16 October 2006 (UTC)
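The z := x + y, w := z - x round-trip is easy to reproduce in seven-digit decimal arithmetic; the particular values below are invented for illustration. In Python:

```python
from decimal import Decimal, getcontext

getcontext().prec = 7       # seven significant decimal digits, as in the example

x = Decimal("1234.567")
y = Decimal("0.1234567")
z = x + y                   # rounds to 7 digits: the tail of y is lost
w = z - x                   # this subtraction is exact, yet y does not come back
```

The subtraction itself introduces no error; the damage was done when z was rounded, so w = 0.123 rather than the original 0.1234567.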
Jake, Earlier I had appended an extension
  e=1; s=3.141600??????...
- e=1; s=3.141593??????...
----------------
  e=1; s=0.000007??????...
  e=-5; s=7.??????...
which someone has whacked. Your text goes
  e=5; s=1.235585
- e=5; s=1.234567
----------------
  e=5; s=0.001018 (true difference)
  e=2; s=1.018000 (after rounding/normalization)
In this, clearly the subtraction has been performed, then there is the rounding/normalisation. Your edit comment "It is not the subtraction which is the problem, it is the earlier round" is unintelligible unless "earlier" is replaced by "later" (thus my remark), though I was wondering if you meant to put the blame on the rounding that went into the formation of the input numbers (the 1.235585 and 1.234567), which, had they been held with more accuracy, would make the cancellation less damaging - except, of course, that there are only seven digits allowed.
In these examples, there is no rounding after the subtraction, only the shifting due to normalisation. Thus I erred in saying that the round was after the subtraction, since there is no round. After an operation there is the renormalisation step, which in general may involve a round first; thus the order of my remark. With subtraction, only if there was a shift for alignment would there be rounding of the result, and if there is shifting, the two values can't be close enough to cause cancellation! A further example would have operands that required alignment for the subtraction (as in the earlier example demonstrating subtraction) and then rounding could result as well as cancellation. (Thimks) But no.
11.23457 - 1.234561 (both seven digits, and not nearly equal)
  11.234570 (zero appended as eighth digit for alignment)
- 1.234561
----------
  10.000009 (subtract)
  10.00001  (round to seven digits)
So, cancellation doesn't involve rounding. Loss of Significance (which does involve rounding, but that's not the problem), and Cancellation are thus two separate phenomena. Clearly, cancellation occurs when the high-order digits match (which requires the same exponent value) while rounding works away at the low-end digit.
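The point that cancellation itself involves no rounding can be checked directly: with seven-digit operands of equal exponent, the difference is exact. A Python check using the figures from the example above:

```python
from decimal import Decimal, getcontext

getcontext().prec = 7       # seven significant decimal digits

a = Decimal("1.235585")
b = Decimal("1.234567")
d = a - b                   # nearly equal operands: the difference is EXACT
# the result 0.001018 has only four significant digits, so nothing is
# rounded away; the "lost" digits vanished when a and b were formed
```

Because the subtraction is exact, (a - b) + b recovers a precisely: cancellation exposes earlier rounding error but creates none of its own.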
The bit about z:=x + y; w:=z - x; relates more to the Kahan Summation algorithm, which I have also messed with though to less disfavour: perhaps WA hasn't noticed. I seem to have upset him.
Hi. I noticed you were the original author of the Interprocedural optimization page. I didn't see any external references or sources to other websites. Perhaps you could put a note on the talk page if it is your original work. Thanks - Hyad 23:36, 16 November 2006 (UTC)
As I recall, there was some reference to interprocedural optimisation in some article I had come to, that led to nothing - a red link, I think. So (feeling co-operative) to fill a hole I typed a simple short essay on the spot. The allusion is to the Halting Problem (Entscheidungsproblem) which I didn't elaborate upon but otherwise the text is mine. A much larger article could be written on details, especially along the notions of "invariants" or other properties that a procedure might require or not, create or not, and advantage be taken or not in code surrounding the procedure invocation, but I stuck with brevity and the basics. NickyMcLean 01:51, 18 November 2006 (UTC)
Hello. Please note that TeX is sophisticated; you don't need to write
(as you did at trial division) if you mean
Michael Hardy 15:32, 7 July 2007 (UTC)
Well... if it was so sophisticated, might it not recognise that <= could be rendered as a fancy symbol? And a further step might be that the "preview" could offer hints to those such as I who (in the usual way) haven't read the manual. Also, I now notice that in the "insert" menu below shown during an edit, there is a ≤ symbol. My editing is usually an escape from work-related stuff, that doesn't often involve TeX. But thanks, anyway. NickyMcLean 22:31, 9 July 2007 (UTC)
Yes, few people these days have had contact with the old machines and therefore have not had to punch cards, interact with control-panel lights and switches, or type on a console typewriter to patch a program, and don't know the difference between 9-edge and 12-edge, etc. However, detailed procedures along with the full rationale for every step get tedious to read (much worse than actually doing it, which quickly became an automatic motor skill after you had done it a few times). Also, one could go on endlessly with operating procedures, and there were multiple different ways of doing a procedure (e.g., there are actually 3 different "clear core" instructions for the Model I: the TFM version you listed, a TF version, and a TR version - plus variants of each of those) that were used by different sites. A short summary is probably more likely to be read and understood at a basic level than long detailed procedures with expanded "commentary". If someone wants more detail they can go to the online references given.
I am thinking of putting some limited implementation specific operating procedures in the IBM 1620 Model I and IBM 1620 Model II articles. However let me have a few days or weeks to think through an organization for the material to avoid getting it all cluttered. I also don't want to just copy procedures from the manuals that are already online and can be looked at if a person is interested. -- RTC ( talk) 23:31, 25 February 2008 (UTC)
Please see Talk:Extended precision#Hyperprecision. -- Tcncv ( talk) 02:35, 19 May 2008 (UTC)
Hello NickyMcLean. Thank you for your improvements on the Tide article. I noticed that some of your edits concern the national varieties in spelling, e.g. analyse and analyze. As I understand from the Manual of Style, see WP:ENGVAR, the intent is to retain the existing variety. Best regards, Crowsnest ( talk) 21:12, 28 May 2008 (UTC)
I only recently visited (i.e. stumbled across) this article and I am astounded by the apparent complexity of such a simple (and in my day ubiquitous technique) that seems almost an afterthought in today's programming world.
It is almost suggested that this technique is only a little better than sequential (i.e. mindless) scanning and sometimes worse than generating a hash table.
There are even better techniques, such as indexed branch tables (using the searched-for value as the index in the first place, which is a perfect hash technique - effectively requiring no hash-table building), that are not even mentioned! Locality of reference is also a vastly overstated issue.
What is even more astounding is that "professional" programmers can have a serious bug outstanding (and copied) for 15 years in a what is a truly ridiculously simple procedure!
The overexuberant use of mathematical formulae obscures the utter simplicity of this method, which is almost entirely encapsulated in the first paragraph and needs little more explanation. ken ( talk) 21:30, 1 July 2008 (UTC)
Hello Nikky, There seems to be much too much emphasis on particular implementations of the 'technique' and catering for overflows etc. These should be part of the specific programming language/ hardware restrictions rather than covered at length in an article of this nature. As for detecting special cases caused by such restrictions, of course they should be part and parcel of the normal testing procedure in the anticipated environment. I recognize that many things are not 'anticipated' by programmers but this only goes to demonstrate their lack of adequate training (or ability).
You mentioned two languages that I am 100% familiar with (Assembler and PL/1). I worked mostly with IBM 360/370 architecture and, to illustrate a point about indexed lookup (if it can be called that!), please see the "Branch Tables" section in the Wikibooks article on 360 branch instructions. For short keys (or the first part of longer keys) of one to two bytes, an extremely effective technique is to use the first character itself (or the first two characters) as the actual 'index' to a table of further index values (i.e. for a one-byte key, a 256-byte table, giving extremely good locality of reference if the table is close, or 32K/64K at worst for a two-byte key). [1]

I used this technique (multiple times) in almost every single program I ever wrote, because most, if not all, of my programs from the early days after I discovered the technique were table driven and in effect 'customized 4GLs' specific to a set purpose. My programs consisted of tables pointing to other tables that controlled and performed most of the required logic. The table-processing code itself was fairly standard and so, once written, could be re-used over and over again without change (or bugs) until a special case occurred that was not covered in the existing tables. Any 'bugs' would be in the table specifications, not the programming. Parsing was a particularly fast and efficient process, as it was usually simply a case of changing a few values in a table copied from a similar table used many times earlier and tested.

I used the technique in my 370 instruction simulator, which provided 100% software simulation of any user application program (written in any language) and included buffer overflow detection and single-instruction stepping (animation). It had to be fast, and the reason it was is that it used one- and two-byte indexing to the point of obsession. Its simulation 'engine' had zero bugs in 20+ years of use at multiple sites around the world executing time-critical on-line transactions for customers.
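The first-byte dispatch ken describes can be sketched in Python (handler names here are purely illustrative; a real 360 implementation would be a branch table of machine addresses, not function objects):

```python
# Sketch of the "first byte as index" technique: the key's first byte
# indexes a 256-entry table directly, a perfect hash with no comparison
# loop and no hash-table building.

def handle_digit(tok):  return ("number", tok)
def handle_alpha(tok):  return ("word", tok)
def handle_other(tok):  return ("punct", tok)

# Build the 256-entry first-byte table once, then reuse it forever.
dispatch = [handle_other] * 256
for b in range(ord('0'), ord('9') + 1):
    dispatch[b] = handle_digit
for b in list(range(ord('A'), ord('Z') + 1)) + list(range(ord('a'), ord('z') + 1)):
    dispatch[b] = handle_alpha

def classify(token):
    # One indexed load replaces any search: the byte *is* the table index.
    return dispatch[token.encode('latin-1')[0]](token)
```

A two-byte key works the same way with a 64K table, trading memory for the elimination of every probe.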
Similar techniques were used in the "works records system" I wrote (a 1974 spreadsheet for ICI) - it had zero bugs in 21 years of continuous use: see [2]. Bear in mind that for these and similar techniques there were, in general, no 'stacks', recursive sub-routine calls, 'memory leaks', 'stack overflows' or similar. Many of today's languages use more instructions 'getting to the first useful instruction' than were used in the entire validation/lookup scenario (frequently fewer than 5 machine instructions in total). As far as I know, C, for instance, does not have an equivalent technique that does not (ultimately) demand a sub-routine call in most cases. (Please advise me if I am wrong on this and provide an example of the code generated and the instruction path-length!)
Naturally my binary chop routines were generic and built around tables too. Once one table was tested thoroughly it could be re-used and invoked "ad infinitum", with the limitations of the hardware, field sizes, etc. already known.
Cheers ken ( talk) 13:51, 4 July 2008 (UTC)
I have been trying to get my hands on a section of compiled 'C' generated code for actual examples of CASE/SWITCH statements, but I haven't found anyone who can comply after months of asking around. It's no good asking me to compile my own, because of the complexity of setting up my PC to configure/download something I could actually recognize - something that didn't also come with about 500 optional 'nuances' (of library x and DLL y thingies), this implementation v. that implementation, etc. etc. - all much too 'three letter acronymish' for me to want to fathom. If I want to use a word processor or a spreadsheet I download it and away I go - but to play with a language like C? I need a combined first degree in acronyms, knowledge of HTML, XSLT, W3C (nth revision), WinZIP, LooseZAP, Tar, feathers, crunch, bite, pango, glib, Gnome, GDA, python, AJAX, DAZ and "baby bio" - you name it - just to get the source of the compiler for 'C'. Then I (think) I need to know how to install the 'C' compiler and compile it, before I can build my program to compile - or at least that's how it appears to me!
By the way:-
1) what is BALROG?
2) if you look at the branch table example I quoted, you will also see a 2nd example using two byte offsets achieving exactly the same purpose (but requiring one extra machine instruction).
3) self modifying code - I am sure I recently added a section to wikipedia article about this very subject but it has mysteriously disappeared along with its history of me putting it there! - a sort of self-modification or 'cosmic censorship' I think. Cheers ken ( talk) 05:24, 8 July 2008 (UTC)
"If god had intended us to use modified code, he would have allowed the genome to evolve and permit Epigenetics" quote 9 july 2008 - you heard it first on Godpedia! The 'Works records System' (first interactive spreadsheet) had the bare bones of Fortran at its heart. I took Fortran and manually 'cut out all the crap', creating new code segments of a clean, re-entrant and "concatenate-able" nature which executed significantly faster than the original on extremely 'late binded' (bound?) data. It is entirely true to say that the resultant optimized code could not have been produced faster by a very competent assembler programmer - because I was that assembler programmer - and speed was my middle name! ken ( talk) 20:10, 9 July 2008 (UTC) Afterthought. You might enjoy this link [3] I did and agree with most of it! ken ( talk) 05:30, 10 July 2008 (UTC)
Minutes taken:

  Checking   No checking   Machine
  900                      IBM1130 (using assembler)
  24.96      18.83         Pentium 200Mc, L1 I&D 16k, Wunduhs98
  32.7       20.4          266, Wunduhs95
  11          6.1          Pentium 4 3200, L1 12k, L2 1024k, WunduhsXP
Hello - could you please provide references for your addition? Thanks and happy editing. Ingolfson ( talk) 06:42, 3 July 2008 (UTC)
I tried to remove self-reference through the redirection — the Binary search algorithm article contained links to the Binary search article's sections, while the latter is a #redir to the former. However, your problems revealed to me an inconsistency in those links: some of them were capitalized, while the actual section titles are all lower-case. In some magic way that makes a difference when addressing a part of the same article, but not for cross-page links.
How does it work now? I changed all section links to lowercase. If it is still bad, revert my last changes. --
CiaPan (
talk)
05:53, 18 September 2008 (UTC)
Hi Nicky!
Could you export your tide plots (and perhaps other plots you might have made) in SVG format and use them instead of the PNG versions in the articles? This is generally the preferred format for plots on Wikipedia. I think there's a free SVG exporter for Matlab here: http://www.mathworks.com/matlabcentral/fileexchange/7401 Morn ( talk) 01:55, 11 November 2008 (UTC)
In orthogonal analysis, I changed the first form below to the second, which is standard:
TeX is sophisticated; there's no need for such a crude usage as the first one.
Also, one should not write a2 with the digit included in the italics; the correct form italicizes only the variable, as in a2. Digits, parentheses, etc., should not be included in these sorts of italics; see WP:MOSMATH. This is consistent with TeX style. Michael Hardy ( talk) 15:53, 25 April 2010 (UTC)
The article Orthogonal analysis has been proposed for deletion because of the following concern:
While all contributions to Wikipedia are appreciated, content or articles may be deleted for any of several reasons.
You may prevent the proposed deletion by removing the {{
dated prod}}
notice, but please explain why in your
edit summary or on
the article's talk page.
Please consider improving the article to address the issues raised. Removing {{
dated prod}}
will stop the
proposed deletion process, but other
deletion processes exist. The
speedy deletion process can result in deletion without discussion, and
articles for deletion allows discussion to reach
consensus for deletion.
RDBury (
talk)
18:07, 29 April 2010 (UTC)
In the example here and mentioned in the current talk page? Thanks. -- Paddy ( talk) 03:17, 7 July 2010 (UTC)
Thanks. -- Paddy ( talk) 06:14, 8 July 2010 (UTC)
Thanks for uploading File:Geothermal.Electricity.NZ.Poihipi.png. You don't seem to have indicated the license status of the image. Wikipedia uses a set of image copyright tags to indicate this information; to add a tag to the image, select the appropriate tag from this list, click on this link, then click "Edit this page" and add the tag to the image's description. If there doesn't seem to be a suitable tag, the image is probably not appropriate for use on Wikipedia.
For help in choosing the correct tag, or for any other questions, leave a message on Wikipedia:Media copyright questions. Thank you for your cooperation. -- ImageTaggingBot ( talk) 23:05, 23 August 2010 (UTC)
The centre of mass of the Earth–Moon system is at about 3/4 of the radius of the Earth. So if it played a role in determining the tidal forcing, the effect on the back side (at 7/4 of the radius from it) would have to be much different from that on the front (at 1/4 of the radius). In reality the only thing that counts is the gradient of the gravity field. This is only slightly different on front and back.
You are right in commenting that the horizontal component is even more important. There are two circles on the surface where the tidal force is parallel to the surface, leading to direct flow. It is difficult to work that in without interrupting the flow for the reader. − Woodstone ( talk) 14:59, 18 January 2011 (UTC)
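The point about the gradient can be checked numerically; the sketch below uses standard approximate values (the Moon's gravitational parameter, the mean Earth–Moon distance and the Earth's radius) and compares the near-side and far-side tidal accelerations against the leading-order estimate 2·GM·R/D³:

```python
# Numeric check that tidal forcing is set by the gradient of the Moon's
# field: near-side and far-side tidal accelerations differ only slightly.
GM_MOON = 4.90e12      # m^3/s^2, gravitational parameter of the Moon
D = 3.844e8            # m, mean Earth-Moon distance
R = 6.371e6            # m, Earth's radius

g_center = GM_MOON / D**2
tidal_near = GM_MOON / (D - R)**2 - g_center   # sublunar point
tidal_far  = g_center - GM_MOON / (D + R)**2   # antipodal point

# Leading-order (gradient) estimate, the same to first order on both sides:
estimate = 2 * GM_MOON * R / D**3
```

Both sides come out within a few percent of the gradient estimate, which is why a single "tidal bulge" term serves for front and back alike.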
Nice post-edit. I am uncomfortable with the large text that you moved, as the motivation for LIBF should come at the start and not merely be explained at the end. "Leapfrogging" is clever but I wonder if we really want to cover every technique to save words of code. Finally, the exclamation point doesn't look encyclopedic. Cheers. Spike-from-NH ( talk) 12:43, 2 March 2012 (UTC)
Yup, I remember an assignment from Assembler class, to code a trivial routine in the absolute minimum number of words, the solution being to put init code inside unused in-line parameters of a LIBF.
I've already post-edited such of your additions as I took exception to above, and this morning rewrote the example. If I added any unwarranted passives, please point them out or revert them, as I'm against them too. However, there is a middle ground between "mumbling...monotone" and stand-up comedy. Spike-from-NH ( talk) 21:54, 4 March 2012 (UTC)
In the Digital PDP-10 stables, just after you learned that JUMPA (Jump Always) was what you should code rather than JUMP (which, with no suffix, jumped never), you learned that the best instruction was usually JRST (Jump and Restore), an instruction that did a lot of miscellaneous things and, incidentally, jumped; it had no provision for indirection and indexing, which were time-consuming even when null. These days, on the rare occasions I code in Pascal, I drop down to assembler only in the half dozen routines repeated most often; but once there, selecting the very fastest assembler code is still an obsession. Spike-from-NH ( talk) 22:27, 21 March 2012 (UTC)
With regrets, I've reverted some of your most recent edit, including all of the details of the CARD0 routine. I think this section should be an overview of assembler programming, touching on the issues of working with device drivers but omitting details such as which bits are used for what and the sequence in which CARD0 does things. Also, as mentioned in the Change History, if a program is going to check for // before converting, then it checks the Hollerith code for /, double-slash not having a Hollerith code; I also restored "when simplicity is desired" (despite the passive!) as the general rule is that modern programming would not use asynchronous I/O.
Spike-from-NH (
talk)
11:30, 23 March 2012 (UTC)
PS--But there is a factoid that belongs in the section on Asynchrony and performance, if it is true and if anyone can come up with a citation for it: Our computer center came to believe that if you simply coded a self-evident Fortran DO loop to read cards in the usual way, it would switch the card-reader motor on and off on every pass, so as to severely reduce its Mean Time Between Failures. Spike-from-NH ( talk) 11:34, 23 March 2012 (UTC)
I concede your point on "when simplicity is required" and have removed the entire sentence that implies there is anything modern about unbuffered I/O. Also rearranged the previous paragraph, though it still has an excessive mix of strategy versus calling convention. Am disappointed you could not confirm my memory about the Fortran use of the card reader. It was indeed the rhythmic stop-and-start of the card reader motor that our guys and IBM engineers suspected was leading to so many service calls. Spike-from-NH ( talk) 11:58, 24 March 2012 (UTC)
I don't accuse anyone of treachery when there is a simpler answer, I know how big organizations work, and IBM was always, famously, the biggest (outside the military). There would have been little ability of Card Reader Engineering to communicate to Fortran Development about the fact that the compiler, in its most typical use, would overtax the card-reader motor. (At DEC, managers hoped for interdisciplinary meetings at The Pub across the street for cross-pollination to detect problems outside the chain of command.)
If I confused you, it was not with a passive (in the technical sense), but you have just stuck one into the article, which I massaged.
We are both treading close to the line of being hit by the guy who slaps templates at the start of articles condemning "original research." We are not supposed to dump our own memories on these pages but document things the reader can verify, and go beyond, by reading the citations. The last time this happened to me, I decided that I did want to do engaging writing and not just Google searches and went to Uncyclopedia for a couple of years to write humor. Spike-from-NH ( talk) 22:09, 25 March 2012 (UTC)
You have indeed devised an opcode that complements the low-order byte of ACC, though it takes an extra memory reference. The incantation I remember is LDD *-1 \ STD *, which even made the keyboard abort impossible and required a reboot.
It doesn't seem a pity to me that we are losing expertise at this (and the kindred expertise at fitting subroutines into 128-word pages on the DEC PDP-8), as the brainpower is now being applied to more useful things. My father once lamented the waning of American manufacturing and I asked him if he would prefer good American typewriters over his quadruple bypass.
Writing a paper and then citing yourself on Wikipedia is a solution, and one I think is used more often than some people let on.
I recently recoded my venerable BASIC interpreter in Pascal. The result executes statements faster than the 1130 could move single words, would never have trouble getting an algorithm to fit in 64K, and runs fine on a used laptop that costs $50. But it is not the best approach to any problem one would have nowadays. Spike-from-NH ( talk) 21:53, 26 March 2012 (UTC)
Minutes taken (checking vs. no checking of array bounds):

  Checking   No checking   Machine
  900                      IBM1130 (using assembler)
  24.96      18.83         Pentium 200Mc, L1 I&D 16k, Wunduhs98
  32.7       20.4          266, Wunduhs95
  11          6.1          Pentium 4 3200, L1 12k, L2 1024k, WunduhsXP
Your red carpet beats mine, as it doesn't have the odd-address restriction. The guys who thought up my incantation wanted the red carpet to still be functional after it rolled out, but there's no need for that. Your incantation for the PDP-11 is the classic one.
Regarding the amount of a corpus that one knows (or thinks one knows: no one can know all the applications of knowledge nor its effects in each application), your statements remind me of Donald Rumsfeld's widely ridiculed conundrum that "often you don't know what you don't know." Being troubled by the loss of expertise is part of what makes men unwilling to throw anything away and accumulate huge stores of tools and connectors. I still have lots of EPROMs, just in case. But I did part with my EPROM programmer.
A speed improvement of 45 seems low, but it's an improvement in only one dimension. Another dimension is the improvements that allowed us to own our own "mainframes" and operate them in dusty rooms without air conditioning. Spike-from-NH ( talk) 22:50, 28 March 2012 (UTC)
My post-edit of you was because, regardless of whether what is at /0001 is XR1 itself or a copy, it is the existence of a memory address for XR1 that enables the register-to-register moves as described.
The table documented /0000, although it is dictated by convention not by hardware, because we refer to it in two other places. But the information you appended to the table strikes me as open-ended; we ought not set out to describe all the variables in the Skeleton Supervisor--though I'm contemplating gathering all the material about long Fortran programs together and mentioning the INSKEL common area. Spike-from-NH ( talk) 02:22, 4 April 2012 (UTC)
The B instruction is not really LDX L0 (6400xxxx) but seems to be a synonym for BSC (4C00xxxx) with no modifier bits set. Separately, I can't discern the difference between BSC and BOSC.
Spike-from-NH (
talk)
15:23, 8 April 2012 (UTC)
I see no return-from-subroutine instruction in the 1130 Reference Card, and claim the writer of the interrupt service routine must have done it manually. But that's problematic too, as LDS is not the reverse of STS; the Reference Card defines only four opcodes (concerning carry and overflow), and the operand is immediate. Spike-from-NH ( talk) 23:07, 9 April 2012 (UTC)
I don't know where I got this. Probably from some other processor, but I can't think which. Thank you for the correction (to IAR). Now, in my new section on "Large Fortran programs," is "phase" the correct term for a program that chains to another program to continue a single task? Spike-from-NH ( talk) 00:24, 8 April 2012 (UTC)
Could have been your mistake, I don't know. I had assumed your Change Summary was accusing me, so I gracefully copped to it. "Phase" is indeed the term for one of the partial processes on the way to a compilation, but I don't know if it's the right term for a program that LINKs to another to complete the job. Spike-from-NH ( talk) 23:07, 9 April 2012 (UTC)
I think you are wrong to have deleted the alternative: "until an interrupt reset the machine". The average interrupt would not reset the machine, nor would it prevent the infinite loop from resuming after return from interrupt. But I recall that it was usually possible after such a stuck job to use the interrupt key on the keyboard to cancel the job and seek through card input for the next job card. Spike-from-NH ( talk) 20:50, 15 April 2012 (UTC)
Have now done so. Indeed, it was not "an interrupt [that] reset the machine" because most interrupts wouldn't; it was the Int Req key, and thank you for remembering its legend. Incidentally, the thing about the "red carpet" we discussed that really conferred bragging rights is that it wiped out the service routine for Int Req and forced the operator instead to remove the cards from the reader, manually identify the offending job, and reboot. Spike-from-NH ( talk) 00:43, 17 April 2012 (UTC)
As with transfer vectors, it looks as though I am about to learn something new about the 1130 that they tried but failed to teach me in 1973. But I want to remove a great deal of technical detail from your contribution. Most notably, you walk us through how a Fortran subroutine could be coded--except that, apparently, it's not. Presumably, it's coded as a bunch of LD *-* and STO *-* ready for patching by SUBIN--and never an ADD instruction referencing a parameter, as your hypothetical code suggests. Given the state of the 1130 in current computing, no one needs such a detailed walk-through of the operation of SUBIN, nor of an error message you say almost never occurs. The only point of looking so deep under the covers is as another example of self-modifying code. We need a clearer summary of what the goal was: I infer that it was the replacement of hypothetical, triple-memory-access indexed instructions by direct memory accesses. After my read, it is astonishing that, for typical subprograms, this autopatching ever resulted in savings of time or memory. It isn't clear where SUBIN gets called; I infer that the call is the first instruction of every Fortran subprogram with parameters. Finally, your text implies that IOR could not be written at all. Surely the answer is that IOR was written in assembler and thus was immune from the gentle mercies of SUBIN?
Spike-from-NH (
talk)
I have tried to apply a smoother mount and dismount, and to use two subsections rather than one. I still don't think we need a complete walk-through of a routine, given that we had just slogged through the typical calling protocol in the previous section. Spike-from-NH ( talk)
And thank you for your post-edits. I took issue with only one thing: No one should need more than the most cursory mention in the prose about what intermediate code the compiler produced before SUBIN patched it. Spike-from-NH ( talk) 21:53, 6 November 2012 (UTC)
Sorry to walk on you (as they used to say in Citizens Band Radio). We are converging. But at the end of this section, we've done so much writing about heuristics and patching that I felt it necessary to say that the copying to a local variable is done by the subprogram's (human) author and not auto-magically. One other thing: I wrote "four in the case that the index register were not implemented as a memory location"--Is this the same thing as writing "four on the IBM 1800"? Spike-from-NH ( talk) 22:36, 6 November 2012 (UTC)
At the risk of walking on you again: My other change is because it is outlandish to have Fortran code appear as the comment of a line of assembler, though your meaning was clear. Spike-from-NH ( talk) 22:41, 6 November 2012 (UTC)
Oh, I don't like this at all. The sentence gave advice to the programmer on a technique to coexist optimally with SUBIN, probably of benefit to no one ever again, even if a working 1130 is found, as that guy on the talk page asks. Your additional text gives the programmer the additional advice not to hurt himself doing so. I would prefer that you strike the entire sentence. It is all true, but such a detailed how-to on actually undertaking an 1130 programming project is excessive (and further temptation to the gadflies who patrol for Original Research). Spike-from-NH ( talk) 20:44, 7 November 2012 (UTC)
Fine; I only added a comma. But declaring advice to be "the standard advice" is the type of language that impels some Wikipedians to demand a citation that I am sure you cannot provide.
Now, I have been perusing your other recent changes and, again, your insistence on specifying exactly how the index registers were implemented (register versus core) is too much detail for current readers. I understand you spent much brainwork figuring out how to save cycles in routines, a skill I too wish were as important as it used to be (though we both benefit enormously from the fact that it is not). But memorializing this, though it fits with my concept of Wikipedia as a global repository, does not fit with the dominant view of Wikipedia as an encyclopedia. More relevant to me, our article is also all there is on the instruction set of the IBM 1800, where the index registers were registers. Spike-from-NH ( talk) 22:55, 7 November 2012 (UTC)
Very well--though I think the typical use of this article is to read about the 1130, not to learn how to program it (with such details as would let the reader write optimal code). Now, back at the last sentence of your SUBIN submission, the problem recurs in the newest wording that we can't tell who is copying that parameter into the local variable. I reverted it to approximately what we had on Tuesday. (This also reinserted the clause, "to increase speed and reduce memory requirements", to which I'm not especially attached.) Spike-from-NH ( talk) 23:23, 8 November 2012 (UTC)
Then I will do so. Spike-from-NH ( talk) 01:05, 9 November 2012 (UTC)
For our next trick: I see that the article is seriously sloppy about the use of the term "subprogram" (which, according to Fortran, comprises "subroutine" and "function"). I'd like to add this convention in passing and correct each use of "subprogram" and "subroutine" to be the correct one. Spike-from-NH ( talk) 01:05, 9 November 2012 (UTC)
This is now done--but a second pair of eyes is always welcome. Spike-from-NH ( talk) 23:58, 9 November 2012 (UTC)
Well-spotted! So, as we now say at the start of Section 3, "subprogram" comprises both "subroutine" and "function." "The actual word in the source file is 'subroutine'" only when that source file defines an instance of the subroutine type of subprogram. On other definitions, the actual word is "function". The US spelling seems appropriate, as US grammar is followed elsewhere in this article about a US product. Spike-from-NH ( talk) 23:43, 11 November 2012 (UTC)
If you don't have sources for a topic, then you should not be adding material to an article. Under WP:BRD, once you've been reverted, you shouldn't try to step the material back in, but rather you should bring it up on the talk page. Stepping the material back in runs the risk of edit warring and WP:3rr violations. My sense is you are puzzling exotic argument passing out for yourself. That's a good thing to do, but that does not mean the answer that you develop should be put in the article. Sadly, Jensen's device does not mention thunk (functional programming). Glrx ( talk) 23:52, 26 June 2012 (UTC)
swap(i,A[i]), but the exposition was not clear. They showed naive code failing by using the Copy Rule, but there was no claim that the problem could be fixed by using two temporary variables (essentially a parallel assignment). Knuth in CACM 10 claimed increment(A[i]) could not be written, suggesting that the Algol 60 definition has a different view of assignments to expressions -- and that there is a more fundamental problem with swap(i,A[i]).
Glrx (
talk)
20:09, 2 July 2012 (UTC)
Ah, university. I recall friends prattling on about pass-by-value and Jensen's device but never being clear on implementations. As my studies involved Physics and Mathematics and were untroubled by newfangled courses in computer science, I did not have a lecturer to berate in a formal context where a proper response could be demanded. Pass-by-name was described as being as if the text of the parameter were inserted in the subprogramme wherever the formal parameter appeared. I saw immediate difficulties with scope and context (the expression may involve x, which was one thing in the caller's context and another quite different thing in the function: the equivalent of macro expansion would lead to messes) but never persuaded the proponents to demonstrate a clarifying understanding. As I was not taking the course and didn't use Algol, if they couldn't see the need for clarity I wasn't going to argue further. I was particularly annoyed by talk of the evaluation of expressions whereby parts of equal priority might be evaluated in any order; for me, order of evaluation was often important and fully defined by the tiebreaker rule of left-to-right (and also, I wanted what is now called short-circuit evaluation of logical expressions), so I was annoyed by Algolists. I can imagine an ad-hoc implementation involving special cases that would cover the usual forms as described in textbooks, but not in general. Thus, parameter "i" may be noted as being both read and written to in the function, and on account of the latter its parameter type could be made "by reference" and all invocations of the function would have to be such that the parameter, supposedly passed by name, is really passed by reference to deliver on bidirectional usage and that expressions would be rejected as being an invalid parameter manifestation.
However, the second parameter, also passed-by-name by default, is seen in the function to not be assigned to, and so its type remains "by name" and thus all invocations within the function are effected by leaping out to evaluate the expression in the calling statement and returning (rather like coroutines!), even if the calling statement presented only a reference such as "i" rather than say "3*x*i^2" - in other words, the expression produces some result, an especially simple notion on a stack-based system. That is, some parameters are seen as expressions, even if having exactly the same presentation as parameters that are references and not expressions. In this circumstance, an expression such as inc(i) as the first parameter would be rejected. As a second parameter, it would be accepted, but there would be disruption over the workings of the for loop whose index variable is "i" - does the loop pre-compute the number of iterations, or, does it perform the test on the value of "i" against the bound every time? But Algol offers still further opportunities: it distinguishes between a variable and a reference to (the value of) a variable, and, "if" is a part of an expression. Thus, "x:=(if i > 6 then a else b) + 6" is a valid expression, the bracketed if-part resulting in a reference to either "a" or "b" to whose value six is to be added. Oh, and expressions can contain assignments to variables as a part also. I suppose this is where there starts to be talk of a "copy rule" for clarification, but it is not quite the style of "macro" substitution.
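The "leaping out to evaluate the expression in the calling statement" described above can be emulated with parameterless closures ("thunks"): each use of the by-name parameter re-evaluates the caller's expression in the caller's scope, which is exactly what makes Jensen's device work. The sketch below is an illustration in Python, not Algol semantics; the write-thunk/read-thunk pair is an assumption of this illustration:

```python
# Jensen's device: sum over i of a by-name expression, computed by
# re-evaluating the caller's expression each time i changes.
def jensen_sum(set_i, lo, hi, term):
    # set_i: thunk giving write access to the caller's loop variable
    # term:  thunk that re-evaluates the caller's expression on demand
    total = 0.0
    for v in range(lo, hi + 1):
        set_i(v)         # assign to the by-name index parameter
        total += term()  # "leap out" to the caller and evaluate there
    return total

# Caller's context: i and x live here; the lambdas capture them.
state = {"i": 0}
x = [1.0, 2.0, 3.0, 4.0]
result = jensen_sum(lambda v: state.update(i=v), 0, 3,
                    lambda: x[state["i"]] ** 2)   # sums x[i]^2 for i = 0..3
```

The second argument behaves like an expression parameter even when the caller passes a bare reference, which is the asymmetry discussed above.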
In the context of the article, I suppose there needn't be mention of what might go wrong (as with an expression for the "i" parameter), as we don't know what might actually go wrong, only that surely it won't work; yet references are unclear. NickyMcLean ( talk) 20:34, 4 July 2012 (UTC)
I'm well aware that call-by-name is not call-by-reference. I was trying to indicate the equivalence in the case of an actual parameter being a simple variable (and a simple simple variable: not an array reference such as a(i) where "i" might be fiddled) so as to enhance the contrast for the case when the actual parameter is an expression that returns a result, which in turn is different from a statement such as "increment(x)" which is not a legitimate parameter as it does not yield a value. Potentially, readers are familiar with call-by-value and call-by-reference and so could take advantage of the comparison. And yes, I do not have a text to hand where such a discussion exists in an approved source. NickyMcLean ( talk) 20:34, 4 July 2012 (UTC)
Thanks for your note. I was referring to 'minimization' as in 'optimization'. The specific problem I'm working on is from 5 up to 100 dimensional fitting of experimental data using a fundamental kinetic model - an error minimization problem. I'm using the BFGS method which needs a numerical derivative since my problem is far too complex to solve analytically. I was considering the accuracy of a four point solution since it may accelerate convergence but realised that since I was using forward differences, I could try central differences first to get the h^2 (better than h) error. This did not improve convergence and in fact slowed everything down by a factor of 2 (~double the number of function evaluations), so accuracy of the derivatives is clearly not the limiting factor in convergence. Clearly in this case a 4 point derivative estimate would simply be 4x slower than forward differences, but it would be great to see the error reduction in the graph.
I know all about the dangers of floating point; my solutions start to degrade in accuracy at h < 1e-7. If only they had thought ahead about the needs of technical / scientific computing when defining doubles. Intel has the built-in 80-bit format but it is maddeningly difficult to work out when it will use it or not - at least in a C# environment. Changing one variable from function local to class local can completely change the accuracy & result of the optimization as it drops from 80- to 64-bit representation. Doug ( talk) 19:14, 26 November 2012 (UTC)
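The forward- versus central-difference trade-off described above can be sketched as follows (math.exp stands in for the objective, and the step size h is chosen illustratively; the O(h) versus O(h^2) truncation behaviour is the point):

```python
import math

def forward_diff(f, x, h):
    # O(h) truncation error; one extra function evaluation per entry
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # O(h^2) truncation error, but two evaluations per entry: the
    # factor-of-two cost in function evaluations noted above
    return (f(x + h) - f(x - h)) / (2 * h)

# Illustration on f = exp at x = 1, where the true derivative is e
f, x, h = math.exp, 1.0, 1e-4
err_fwd = abs(forward_diff(f, x, h) - math.exp(1.0))
err_cen = abs(central_diff(f, x, h) - math.exp(1.0))
```

The central estimate is orders of magnitude more accurate here, which is consistent with the observation that derivative accuracy was not the limiting factor in the BFGS convergence: the cheaper, cruder gradient was already good enough.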
The article Gavin Smith (author) has been proposed for deletion because it appears to have no references. Under Wikipedia policy, this newly created biography of a living person will be deleted unless it has at least one reference to a reliable source that directly supports material in the article.
If you created the article, please don't be offended. Instead, consider improving the article. For help on inserting references, see Referencing for beginners, or ask at the help desk. Once you have provided at least one reliable source, you may remove the {{ prod blp}} tag. Please do not remove the tag unless the article is sourced. If you cannot provide such a source within seven days, the article may be deleted, but you can request that it be undeleted when you are ready to add one. Lakun.patra ( talk) 18:21, 19 March 2015 (UTC)
As I stated at the bottom of ʘx's talk page, " sorry for dragging you into a discussion that is probably not in your area of expertise either."
Given how long the discussion has been, I am willing to guess you have not read all of it. There is no need to bother; I will summarize for you.
By directing you to ʘx's talk page, I was merely trying to centralize the discussion, as the discussion on the article's talk page was less active. I was also pointing out that the concern over quantum bogosort's theoretical validity does not stem solely from the physical barriers to destroying the universe but also from the underlying concepts of an algorithm and a function, which are often defined in ways that quantum bogosort fails to meet. ʘx made the point that a specific formal system would need to be chosen in which to resolve the ill-defined aspects of quantum bogosort; neither I, nor the single source in the removed article content, nor the sci-fi magazine you suggested had successfully done so.
The discussion was unnecessarily prolonged due to ʘx correcting various misunderstandings and technical flaws of mine. I struggled to respond to the technical issues while simultaneously steering the discussion toward Wikipedia policies and guidelines about the inclusion and organization of article content.
I am not entirely sure whether we will merely revert the removal, as Graeme Bartlett seems to suggest, or whether we will write a standalone article, as ʘx suggests. It depends on what sources we manage to find; more sources ought to be added in either case. I hope this helps. -- SoledadKabocha ( talk) 08:08, 6 November 2015 (UTC)
Hello, NickyMcLean. Voting in the 2016 Arbitration Committee elections is open from Monday, 00:00, 21 November through Sunday, 23:59, 4 December to all unblocked users who have registered an account before Wednesday, 00:00, 28 October 2016 and have made at least 150 mainspace edits before Sunday, 00:00, 1 November 2016.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2016 election, please review the candidates' statements and submit your choices on the voting page. MediaWiki message delivery ( talk) 22:08, 21 November 2016 (UTC)
Hello, NickyMcLean. Voting in the 2017 Arbitration Committee elections is now open until 23:59 on Sunday, 10 December. All users who registered an account before Saturday, 28 October 2017, made at least 150 mainspace edits before Wednesday, 1 November 2017 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2017 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery ( talk) 18:42, 3 December 2017 (UTC)
Can you elaborate on the reasons for undoing my edit on the Kahan summation? I found it clarified the algorithm quite a bit. Summentier ( talk) 14:25, 6 November 2018 (UTC)
var sum = 0.0
as is allowed by C, because there is usually no need to declare ordinary variables in pseudocode, and the added facility of initialising them in the same statement adds complexity to the reading for no gain in the exposition. I would suggest not declaring the variables, and simply initialising them at the start of the loop. The declaration of y and t within the loop is grotesque and distracting. Imagine what a non-user of such an arrangement would wonder about this. Further, is there to be inferred some hint to the compiler that y and t are temporary variables for use within the loop only, to be undeclared outside it? This is an Algol possibility. Perhaps even that the compiler should produce code whereby they might be held in hardware registers? Possible register usage that wrecks the workings of the method is discussed further down. But enthusiasts for "var" usages abound, though your Pythonish code eschews them.
for i:=1:N
or maybe use "to" instead of ":", so long as it had been made clear that "input" was an array of elements indexed 1 to N. And there is always the fun caused by those computer languages that insist that arrays always start with index zero. Enthusiasts of C had tried a version where the initialisation involved input(1) and the loop ran "for i:=2:N", failing to realise that this won't work should N = 0, as well as complicating the exposition. The idea to be conveyed is that all the elements are to be added, and the details of the control for this, and especially possible tricks, are not important compared to the exposition of the algorithm.
Hello, NickyMcLean. Voting in the 2018 Arbitration Committee elections is now open until 23:59 on Sunday, 3 December. All users who registered an account before Sunday, 28 October 2018, made at least 150 mainspace edits before Thursday, 1 November 2018 and are not currently blocked are eligible to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2018 election, please review the candidates and submit your choices on the voting page. MediaWiki message delivery ( talk) 18:42, 19 November 2018 (UTC)
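For reference, the compensated summation debated in the Kahan exchange above can be sketched as follows. This is a generic rendering of the well-known algorithm, with the variables initialised at the start rather than declared mid-loop, in the spirit of that discussion; it is not the article's exact code:

```python
def kahan_sum(input):
    sum = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in input:
        y = x - c            # corrected next term
        t = sum + y          # big; low-order digits of y may be lost here
        c = (t - sum) - y    # recover what was lost (algebraically zero)
        sum = t
    return sum

# 1.0 followed by many terms too small to register individually:
# naive left-to-right summation loses every one of them, while the
# compensated sum accumulates them in c.
vals = [1.0] + [1e-16] * 10
# sum(vals) stays at 1.0; kahan_sum(vals) exceeds 1.0
```

Note the subtraction (t - sum) - y must not be "optimised" away or held at a different precision in a register, which is exactly the register-usage hazard mentioned in the discussion.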
Unique factorization does not "have to" speak of every integer larger than one. The convention for the empty product is (1) correct and (2) widespread among people with training at, say, the advanced undergraduate level, but the potential audience for prime number includes people whose mathematics education ended in primary or secondary school (or even at the college calculus level) who were never introduced to it, and it's potentially helpful to those readers to introduce 1 as a special case. -- JBL ( talk) 10:35, 17 May 2019 (UTC)
Hello! Voting in the 2023 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 11 December 2023. All eligible users are allowed to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2023 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery ( talk) 00:23, 28 November 2023 (UTC)