Dear string-to-integer parsers…

These are very useful functions that any language with distinct string and integer types will include in their standard library. Pass in a string of decimal digits and it’ll return the equivalent binary integer that you can do mathematics with.

I’d like to make a modest proposal that I’d find very useful, and maybe you, dear reader, would too.

“The rich man in his castle, the poor man at his gate. He made them, high or lowly, and ordered their estate.”

Who me?

Specifically, I’m thinking of parser functions that work like this…

ParseInt("123");      // 123.
ParseInt("-456");     // -456.
ParseInt("Rutabaga"); // Rejected.

Note that “rejected” could mean anything in practice, as long as the response is distinct from returning a number. Maybe it throws an exception, maybe it returns null, maybe it returns a Boolean alongside the value to tell you whether the string was valid or not.

Point is, I’m thinking of parser functions that have two distinct kinds of result. A success result that includes the integer value, or a rejection result. No half-way results.
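To make that concrete, here’s a rough sketch in TypeScript of the shape I have in mind. The name parseDecimalInt and its result type are made up purely for illustration, not any particular standard library’s API.

type ParseResult =
    | { ok: true; value: number }
    | { ok: false };

function parseDecimalInt(s: string): ParseResult {
    // Only an optional leading minus sign followed by decimal digits gets through.
    if (!/^-?[0-9]+$/.test(s)) {
        return { ok: false };
    }
    return { ok: true, value: Number(s) };
}

parseDecimalInt("123");      // { ok: true, value: 123 }
parseDecimalInt("-456");     // { ok: true, value: -456 }
parseDecimalInt("Rutabaga"); // { ok: false }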

I will acknowledge that there are standard library functions that keep going along the string gobbling digits until they hit a non-digit, then tell the caller both what number they found and where that first non-digit is. Those are very useful for tokenizing loops as part of compilers, but my idea would break that interface too much. If that’s your variety of parser, sorry, but this post isn’t for you.
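In case you’re not sure which variety you’ve got, that digit-gobbling style looks roughly like this sketch, where readInt is a made-up name in the spirit of C’s strtol:

function readInt(s: string, start: number): { value: number; next: number } {
    // Gobble digits from the start position until we hit something that isn't one.
    let next = start;
    while (next < s.length && s[next] >= "0" && s[next] <= "9") {
        next++;
    }
    return { value: Number(s.slice(start, next) || "0"), next };
}

readInt("123abc", 0); // { value: 123, next: 3 } – stopped at the 'a'.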

Also, I’m thinking of functions that parse as decimal. Maybe you have optional flags that allow you to specify what base to use, but it parses as decimal by default. I’m concerned only with the decimal mode of operation.

Round Numbers and “E” Notation

You might be familiar with “E” notation if you work with very large or very small floating point numbers. This is a shorthand for scientific notation where the letter E translates to “times ten to the power of”.

FloatParse("1E3");    // 1000.0
FloatParse("5E-3");   // 0.005
FloatParse("1E+100"); // One Googol.

This notation is handy for decimal round numbers. If you want to type in a billion, instead of having to count as you press the zero key on your keyboard over and over, you could instead type “1E9”. Which one of the following numbers is a billion? Can you tell at a glance?

100000000 10000000000 1000000000

The problem is that E notation is stuck in the floating-point world. I’d really like it if, anywhere I can type an integer (such as in an electronic form), I could use E notation whenever I want to type a large round number.

For that to work, the functions that convert strings to integers need to allow this.

Pinning it down

Okay, we’re all software engineers here. Let’s talk specifics.

If the string supplied to the function is of the form (mantissa)"E"(exponent), with the mantissa a single digit from 1 to 9 and the exponent anywhere from zero up to however high your integer type goes, then instead of rejecting the string, return the integer value this E notation string represents.

Add the usual range checks (for example, 9E18 is the largest such value that fits in a signed 64-bit integer), do the right thing when there’s a minus sign character at the start, and we’re done.
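To show roughly what I mean, here’s a sketch in TypeScript, using BigInt to stand in for a signed 64-bit integer. The name parseIntWithE and the result shape are mine, for illustration only:

const INT64_MAX = 9223372036854775807n;
const INT64_MIN = -9223372036854775808n;

type ParseResult64 = { ok: true; value: bigint } | { ok: false };

function parseIntWithE(s: string): ParseResult64 {
    // Plain decimal digits (with an optional leading minus sign) behave as before.
    if (/^-?[0-9]+$/.test(s)) {
        const value = BigInt(s);
        return value >= INT64_MIN && value <= INT64_MAX
            ? { ok: true, value }
            : { ok: false };
    }
    // The new form: (mantissa)"E"(exponent), mantissa a single digit 1 to 9,
    // exponent zero or more, with an optional minus sign at the very start.
    const match = /^(-?)([1-9])E([0-9]+)$/.exec(s);
    if (match === null) {
        return { ok: false };
    }
    const [, sign, mantissa, exponent] = match;
    if (Number(exponent) > 18) {
        return { ok: false }; // Range check: 9E18 fits in 64 bits, 1E19 does not.
    }
    const value = BigInt(sign + mantissa) * 10n ** BigInt(exponent);
    return value >= INT64_MIN && value <= INT64_MAX
        ? { ok: true, value }
        : { ok: false };
}

parseIntWithE("1E9");  // { ok: true, value: 1000000000n }
parseIntWithE("-5E3"); // { ok: true, value: -5000n }
parseIntWithE("1E19"); // { ok: false } – too big for a signed 64-bit integer.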

“But there might be code depending on values like that being rejected!”

That’s a fair concern. I am advocating for a change in behaviour in the standard library after all.

I am seeking only to change behaviour in the domain of inputs that would otherwise produce a rejection response.

If ParseInt("1E3") used to return a rejection but now returns 1000, is that a bad thing? The user could already type "1000", but this time they wrote "1E3" instead. What’s the harm in carrying on as if they had typed 1000 all along?

I can think of some pathological cases. Maybe the programmer wanted to limit an input to 1000, but instead of using the less-than operator on the integer like a normal person, they check that the length of the string is less than 4. "1E9" would pass that validation, but a billion would be returned. It seems unlikely that anyone would do that in practice.

The parser function might be used not for the integer it returns, but as a validator: you have a string and you want to know whether it is a valid sequence of decimal digits or not. If that’s what you need, the integer parser is probably the wrong tool. Parsers are already a little flexible about the range of allowable inputs, accepting leading plusses, leading zero digits, or commas grouping digits into triples. If you care whether a string is the one canonical ASCII representation of a number, then I would follow the parse with a test that converts the integer back into a string and checks that it matches the input string.
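A round-trip check along those lines might look like this, building on the made-up parseIntWithE sketch from earlier:

function isCanonicalDecimal(s: string): boolean {
    // Parse, render the result back to a string, and insist it matches the input.
    const parsed = parseIntWithE(s);
    return parsed.ok && String(parsed.value) === s;
}

isCanonicalDecimal("1000");  // true
isCanonicalDecimal("01000"); // false – a leading zero isn't the canonical form.
isCanonicalDecimal("1E3");   // false – parses fine, but renders back as "1000".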

“E might be a hex digit.”

Your function returns the number 7696 for the input "1E10" and not ten billion? What you’ve got there is a hex parser, not a decimal parser. E notation only makes sense in the world of decimal numbers.

If your decimal parser automatically switches to hex parsing when it sees ‘A’ to ‘F’ characters, then you’ve got a parser that’s unreliable for hex number strings, because a lot of hex numbers contain only the ‘0’ to ‘9’ digits. If my code gets a hex number as input, I’m going to call the hex parser. A supposedly general-purpose parser isn’t going to know whether "1000" should return 1000, 4096 or 8, and will need to be told.
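For what it’s worth, JavaScript’s built-in parseInt already deals with this by taking an explicit radix argument, which is exactly the “will need to be told” part:

parseInt("1000", 10); // 1000
parseInt("1000", 16); // 4096
parseInt("1000", 2);  // 8
parseInt("1E10", 16); // 7696 – to a hex parser, 'E' is just another digit.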

While we’re on the subject of hex numbers, I may be following this up with a proposal that “H” should mean “times 16 to the power of” in a similar style, but that’ll be for another day.

 “Delores, I live in fear. My love for you is so overpowering. I’m afraid that I will disappear.”

“Because counting to nine is really hard”

So there’s my suggestion. In short, I’m fed up with having to count to nine when I want to type a billion, and with having to check my work by counting the little row of identical ovals on the screen. I look forward to comments telling me how wrong I am.

Picture Credits
📸 “Swift” by Tristan Ferne. (Creative Commons.)
📸 “Kibo Summit, Mount Kilimanjaro, Tanzania” by Ray in Manila. (Creative Commons.)

(Also, a billion is a one followed by nine zeros. Anyone who says it has twelve zeros is quite wrong.)
