From: Keith.S.Thompson+u@gmail.com
bart writes:
> On 29/11/2025 20:24, Waldek Hebisch wrote:
>> bart wrote:
>>> On 24/11/2025 20:26, David Brown wrote:
>>>> On 24/11/2025 19:35, bart wrote:
>>>
>>>>> But now there is this huge leap, not only to 128/256/512/1024 bits,
>>>>> but to conceivably millions, plus the ability to specify any weird
>>>>> type you like, like 182 bits (eg. somebody makes a typo for
>>>>> _BitInt(128), but they silently get a viable type that happens to be a
>>>>> little less efficient!).
>>>>>
>>>>
>>>> And this huge leap also lets you have 128-bit, 256-bit, 512-bit, etc.,
>>>
>>> And 821 bits. This is what I don't get. Why is THAT so important?
>>>
>>> Why couldn't 128/256/etc have been added first, and then those funny
>>> ones if the demand was still there?
>>>
>>> If the proposal had instead been simply to extend the 'u8 u16 u32 u64'
>>> set of types by a few more entries on the right, say 'u128 u256 u512',
>>> would anyone have been clamouring for types like 'u1187'? I doubt it.
>>>
>>> For sub-64-bit types on conventional hardware, I simply can't see the
>>> point, not if they are rounded up anyway. Either have a full range-based
>>> types like Ada, or not at all.
>> First, _BitInt(821) (and _BitInt(1187)) are really unimportant. You
>> simply get them as a byproduct of general rules.
>
> That they are allowed is the problem. People use them and expect the
> compiler to waste its time generating bit-precise code.

You are literally the only person I've seen complain about it. And you
can avoid any such problem by not using unusual sizes in your code.
You want to impose your arbitrary restrictions on the rest of us.
Do you even use _BitInt types?

Oh no, I can type (n + 1187), and it will yield the sum of n and 1187.
Why would anyone want to add 1187 to an integer? The language should be
changed (made more complicated) to forbid operations that don't make
obvious sense!!
> You can have general _BitInt(N) syntax and have constraints on the
> values of N, not just an upper limit.

No, you can't, because the language does not allow the arbitrary
restrictions you want. If an implementer finds _BitInt(1187)
too difficult, they can set BITINT_MAXWIDTH to 64.

One more time: Both gcc and llvm/clang have already implemented
bit-precise types, with very large values of BITINT_MAXWIDTH.
What actual problems has this fact caused for you, other than giving
you something to complain about?
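For what it's worth, here's a minimal C23 sketch (my own made-up
example, not from any real codebase; 1187 is just the width from this
thread) of how a program can guard a wide _BitInt type on the
implementation's advertised BITINT_MAXWIDTH, defined in <limits.h>:

    #include <limits.h>
    #include <stdio.h>

    #if BITINT_MAXWIDTH >= 1187
    typedef unsigned _BitInt(1187) wide_t;   /* bit-precise type  */
    #else
    typedef unsigned long long wide_t;       /* portable fallback */
    #endif

    int main(void)
    {
        printf("BITINT_MAXWIDTH = %d\n", (int)BITINT_MAXWIDTH);
        printf("sizeof (wide_t) = %zu\n", sizeof (wide_t));
        return 0;
    }

An implementation that set BITINT_MAXWIDTH to 64 would simply take
the fallback branch; nothing breaks.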
[...]
>> _BitInt(8) makes a lot of sense for 8-bit processors. The
>> requirement for number of bits not divisible by 8 came from
>> requirement of portability to FPGAs, where hardware may use
>> odd widths.
>
> Wouldn't 'char' have a different width there anyway? Or can it be even
> odder where char is 7 bits and int is 19?

char is at least 8 bits wide, and the size of int in bits must be a
multiple of CHAR_BIT (though its width needn't be if there are
padding bits).
I don't know about C implementations for FPGAs, but I presume they
still obey the rules of the language.
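To make the odd-width point concrete (a made-up example, not from
any FPGA toolchain): even if the storage for an unsigned _BitInt(N)
is rounded up to whole bytes, the arithmetic still wraps at exactly
N bits, which is what you want when modelling, say, a 12-bit ADC
register:

    #include <stdio.h>

    int main(void)
    {
        unsigned _BitInt(12) adc = 4095;  /* maximum 12-bit value  */
        adc += 1;                         /* wraps modulo 2^12     */
        printf("%u\n", (unsigned)adc);    /* prints 0              */

        unsigned _BitInt(7) small = 127;  /* maximum 7-bit value   */
        small += 3;                       /* (127 + 3) % 128 == 2  */
        printf("%u\n", (unsigned)small);  /* prints 2              */
        return 0;
    }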
[...]
> Apparently _BitInt(8) is incompatible with int8_t.

Yes, it is. char, signed char, and unsigned char are also incompatible
with each other. How is that a problem? They're both scalar types, so
they're implicitly converted when needed.
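A small sketch of what that means in practice (the identifiers are
mine): values convert implicitly in both directions, but because the
types themselves are not compatible, pointers to them don't mix:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        signed _BitInt(8) b = -5;
        int8_t i = b;          /* fine: implicit conversion     */
        b = i + 1;             /* fine the other way around too */

        /* int8_t *p = &b; */  /* constraint violation:
                                  incompatible pointer types    */

        printf("i = %d, b = %d\n", (int)i, (int)b);  /* -5, -4 */
        return 0;
    }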
[...]
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
void Void(void) { Void(); } /* The recursive call of the void */