From: bc@freeuk.com
On 30/11/2025 00:46, Keith Thompson wrote:
> bart writes:
>> On 29/11/2025 20:24, Waldek Hebisch wrote:
>>> bart wrote:
>>>> On 24/11/2025 20:26, David Brown wrote:
>>>>> On 24/11/2025 19:35, bart wrote:
>>>>
>>>>>> But now there is this huge leap, not only to 128/256/512/1024 bits,
>>>>>> but to conceivably millions, plus the ability to specify any weird
>>>>>> type you like, like 182 bits (eg. somebody makes a typo for
>>>>>> _BitInt(128), but they silently get a viable type that happens to be a
>>>>>> little less efficient!).
>>>>>>
>>>>>
>>>>> And this huge leap also lets you have 128-bit, 256-bit, 512-bit, etc.,
>>>>
>>>> And 821 bits. This is what I don't get. Why is THAT so important?
>>>>
>>>> Why couldn't 128/256/etc have been added first, and then those funny
>>>> ones if the demand was still there?
>>>>
>>>> If the proposal had instead been simply to extend the 'u8 u16 u32 u64'
>>>> set of types by a few more entries on the right, say 'u128 u256 u512',
>>>> would anyone have been clamouring for types like 'u1187'? I doubt it.
>>>>
>>>> For sub-64-bit types on conventional hardware, I simply can't see the
>>>> point, not if they are rounded up anyway. Either have full
>>>> range-based types like Ada, or none at all.
>>> First, _BitInt(821) (and _BitInt(1187)) are really unimportant. You
>>> simply get them as a byproduct of general rules.
>>
>> That they are allowed is the problem. People use them and expect the
>> compiler to waste its time generating bit-precise code.
>
> You are literally the only person I've seen complain about it. And you
> can avoid any such problem by not using unusual sizes in your code.
>
> You want to impose your arbitrary restrictions on the rest of us.
>
> Do you even use _BitInt types?
>
> Oh no, I can type (n + 1187), and it will yield the sum of n and 1187.
> Why would anyone want to add 1187 to an integer? The language should be
> changed (made more complicated) to forbid operations that don't make
> obvious sense!!
You seem to be mixing up values and types. Or you are arguing that there
should be nearly as many integer types as there are possible values.
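
To make the distinction concrete, here's a minimal sketch (assuming a
C23 compiler with _BitInt support, such as recent clang or gcc): adding
1187 is one operation on an existing type, while _BitInt(1187)
introduces a whole new type with its own representation and conversions.

  /* Sketch only; needs C23 _BitInt support (e.g. clang 16+, gcc 14+). */
  #include <stdio.h>

  int main(void) {
      int n = 1;
      printf("%d\n", n + 1187);    /* a value: ordinary int arithmetic */

      _BitInt(1187) big = 0;       /* a type: 1187-bit representation */
      printf("%zu\n", sizeof big); /* storage is implementation-defined,
                                      typically rounded up to whole words */
      return 0;
  }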
Everyone in this group seems obsessed with not having any limitations at
all in the language.
For example, gcc allows identifiers up to 4 billion characters long, or
something like that (I think I've tested it with three
1-billion-character variables).
There was a discussion here about it. Of course, even million-character
names would be totally impractical to work with. I'd have trouble with
256 characters (my own cap).
The rationale for _BitInt seems to be heading the same way: the work for
billion-character variables has already 'been done'. That doesn't mean
they are sensible, practical or efficient!
>> You can have general _BitInt(N) syntax and have constraints on the
>> values of N, not just an upper limit.
>
> No you can't, because the language does not allow the arbitrary
> restrictions you want. If an implementer finds _BitInt(1187)
> too difficult, they can set BITINT_MAXWIDTH to 64.
>
> One more time: Both gcc and llvm/clang have already implemented
> bit-precise types, with very large values of BITINT_MAXWIDTH.
> What actual problems has this fact caused for you, other than giving
> you something to complain about?
What problem would there be if _BitInt sizes above the machine word size
had to be multiples of the word size?
In what way would it inconvenience /you/?
I just don't like unnecessarily flexible, lax or over-ambitious features
in a language. I think that is as much poor design as underspecifying.
So I'm interested in what that one extra bit in a million buys you. Or
that one bit fewer.
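
As a concrete check (a sketch only; the exact figures are
implementation-defined), C23 exposes the implementation's limit as
BITINT_MAXWIDTH in <limits.h>, and odd widths already occupy whole units
of storage:

  /* Sketch; assumes C23 _BitInt support (clang 16+, gcc 14+ on x86-64). */
  #include <limits.h>
  #include <stdio.h>

  int main(void) {
      printf("BITINT_MAXWIDTH = %ld\n", (long)BITINT_MAXWIDTH);
      /* Storage is rounded up to whole bytes/words, so the odd bits buy
         nothing in footprint; only the arithmetic range differs. */
      printf("sizeof _BitInt(821) = %zu\n", sizeof(_BitInt(821)));
      printf("sizeof _BitInt(832) = %zu\n", sizeof(_BitInt(832)));
      return 0;
  }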
> [...]
>
>>> _BitInt(8) makes a lot of sense for 8-bit processors. The
>>> requirement for number of bits not divisible by 8 came from
>>> requirement of portability to FPGA, where hardware may use
>>> odd width.
>>
>> Wouldn't 'char' have a different width there anyway? Or can it be even
>> odder where char is 7 bits and int is 19?
>
> char is at least 8 bits wide, and the size of int must be a multiple of
> CHAR_BIT (though its width needn't be if there are padding bits).
> I don't know about C implementations for FPGAs, but I presume they
> still obey the rules of the language.
>
> [...]
>
>> Apparently _BitInt(8) is incompatible with int8_t.
>
> Yes, it is. char, signed char, and unsigned char are also incompatible
> with each other. How is that a problem?
Signed and unsigned char have ranges of -128..+127 and 0..255
respectively when they are 8 bits wide; they cannot be compatible.
But _BitInt(8) also has a -128..+127 range, yet it is not compatible
with signed char or int8_t.
Why not? Under what circumstances would somebody choose _BitInt(8) over
those alternatives, and why?
When 'char' is signed, that means a signed 8-bit type on PCs can be
chosen from amongst four incompatible types!
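
A small sketch of what that incompatibility looks like in practice
(assuming a C23 compiler with _BitInt support): values convert freely
between the types, but pointers to them do not.

  /* Sketch; assumes C23 _BitInt support. */
  #include <stdint.h>

  int main(void) {
      signed char sc = -1;
      int8_t      i8 = sc;  /* fine: int8_t is typically a typedef for
                               signed char */
      _BitInt(8)  b8 = sc;  /* fine: implicit value conversion */

      signed char *p = &sc;
      /* _BitInt(8) *q = p;    constraint violation: incompatible
                               pointer types */
      _BitInt(8) *q = &b8;  /* must point at an actual _BitInt(8) object */

      (void)i8; (void)p; (void)q;
      return 0;
  }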
> They're both scalar types, so
> they're implicitly converted when needed.
> [...]
>