C vs C++ vs Lisp (156)

1 Name: #!/usr/bin/anonymous : 2008-06-13 12:52 ID:NJsTUwig

At first, I decided to try learning C++, then I heard C was "better", now I'm hearing Lisp is good. I'm getting annoyed with indecisiveness, so I'm going to ask this one last time... Which language would be best to learn out of these three? My intentions in programming will be small projects, game mods, network applications and other things of that sort. What would you all recommend out of those three, and why? Not looking for anything outside the above three.

Please halp.

7 Name: #!/usr/bin/anonymous : 2008-06-13 18:09 ID:NJsTUwig

>>5
>>6

Thanks, that's the kind of reply I was looking for.

Onto C, then!

8 Name: #!/usr/bin/anonymous : 2008-06-14 00:23 ID:Heaven

>>7
>>2 here
Here's my reply:
Learn C, then lisp.
When you're done, and you feel quite familiar with the languages, learn C++. Or, fuck that, just learn common lisp instead of C++.
Common lisp has most of C++'s features that C doesn't have.
Paradigms, features, etc are more important than syntax.
When you're done, study something more OS-related, or networking, or encryption, or 3D, or whatever floats your boat.

9 Name: #!/usr/bin/anonymous : 2008-06-14 13:31 ID:NJsTUwig

>>8
Alright, I'll try that too. I've pretty much got all the basic-to-intermediate things down with C, so since that won't be my "main" language, I'll look at Lisp like you said.

Also, I has a question:

Some people say different languages are good for different tasks... what do they mean by that? Aren't "general purpose" languages (like C, C++, etc.) good for everything, in that case?

10 Name: #!/usr/bin/anonymous : 2008-06-14 13:47 ID:Heaven

>>9
What the hell is a "general purpose language"?
There's no such term. A language can be imperative or functional, and it can support various features such as safe strings, regexes, namespaces, strict typing, etc.
I also mentioned that in my post:

> Paradigms, features, etc are more important than syntax.

and no, you don't know C. fuck the "basic-to-intermediate".
You either know or don't know a language, and in your case you don't know C so get your ass working and learn it properly.
You have to know everything about the language. (With the exception of complex numbers and that sort of thing, which are almost never needed.)

11 Name: #!/usr/bin/anonymous : 2008-06-14 15:07 ID:NJsTUwig

>>10

Well, you didn't have to be fucking rude about it. In case you can't fucking read properly, I said C wasn't going to be my main language. I'm not going to learn a full fucking language if I'm not going to use it, asshole. Try again. Thanks.

12 Name: #!/usr/bin/anonymous : 2008-06-14 15:10 ID:NJsTUwig

13 Name: dmpk2k!hinhT6kz2E : 2008-06-14 16:51 ID:Heaven

> You have to know everything about the language.

Knowing everything about a language is a bit over the top, methinks. When was the last time someone needed to know trigraphs when programming C?

I'm also not a fan of overinvesting in a single language since they all suck in various ways. That way leads to religion because of the investment involved.

> Some people say different languages are good for different tasks... what do they mean by that?

Some languages are better at some things than others. Languages have different focuses.

Perl has great support for processing text, Erlang makes distributed computation and failover simple, the Lisps dominate metaprogramming, C is the closest thing to a portable assembler...

Let's say you need to munge some text. C is a perfectly capable language for doing that, but it will be many times easier to use Perl instead. But if you're writing performance code and twiddling bits, C is the better choice.

14 Name: #!/usr/bin/anonymous : 2008-06-22 06:23 ID:qaSVOnLE

Nothing wrong with learning all three. It is always good for a programmer to have multiple tools at his disposal.

As for order, learn C first. Then do Scheme (the clean Lisp). If you learn the latter first, C might seem like a step backward to you.

Then learn the dirty ones: Common Lisp and C++. My prejudiced judgment suggests skipping C++, but it is very heavily used in game programming.

15 Name: #!/usr/bin/anonymous : 2008-06-27 22:00 ID:Heaven

FORTH LEARN .

16 Name: #!/usr/bin/anonymous : 2008-07-01 22:11 ID:qvl50+Vg

They're all valuable, but I'm learning a Lisp first. This, in part, is why.

17 Name: #!/usr/bin/anonymous : 2008-07-04 09:53 ID:3CWa2CkL

C++ if you want to get into game programming. It really depends on what you want to achieve.

18 Name: #!/usr/bin/anonymous : 2008-07-08 20:33 ID:4yEyRNo+

>>10
I only read K&R. C99 can stick its complex numbers up its anus.

19 Name: dmpk2k!hinhT6kz2E : 2008-07-09 00:05 ID:Heaven

I have the opposite gripe: C99 might have been too much in some areas, but it was far too little overall.

What we really need is a C 2.0.

It's never going to happen though. >:(

20 Name: #!/usr/bin/anonymous : 2008-07-09 06:33 ID:wStLV0uC

>>19
C 2.0? What do you think is missing from C?
Where was C99 too much?
I for one would really like a "typeof" macro.

21 Name: dmpk2k!NvI5dkBF.E : 2008-07-09 15:53 ID:Heaven

Things I'd love to see:

  • Null-terminated strings eliminated in favour of Pascal-style strings. In other words real strings, not an error-prone pointer hack.
  • The standard library cleaned up. Since strings know their length and are 8-bit clean, you can throw away some functions and merge others. Also, function names and arguments can be made consistent.
  • Get rid of trigraphs.
  • Some form of local type inference.
  • I'd love to see computed gotos and labels-as-values become standard.

It'd still be C (other than the type inference), but it'd be a less error-prone and more pleasant language to use. The only reason I'd call it "2.0" is because the new standard library wouldn't be backward-compatible.

I'm undecided if strings should support Unicode. Unicode's the future, and it'd be nice to put Unicode in string literals and have it Just Work, but have you seen the size of ICU? An alternative is to make strings use machine words instead of bytes and leave ICU separate.

23 Name: #!/usr/bin/anonymous : 2008-07-09 19:39 ID:L0awF4Kv

>>21

I'm positive there are projects working on "clean" or "safe" subsets of C; naturally, I can't remember their names, but Googling would certainly turn up something. If I encounter anything, I'll post links here.

24 Name: #!/usr/bin/anonymous : 2008-07-09 21:52 ID:m2TJrD+R

>>23

What exactly would be the point of a "clean" or "safe" subset of C, in your mind?

There's Cyclone at http://www.research.att.com/viewProject.cfm?prjID=67 so I don't think you or >>21 are alone, but I doubt Cyclone's usefulness as well...

25 Name: #!/usr/bin/anonymous : 2008-07-09 21:58 ID:m2TJrD+R

>>21

Do trigraphs really hurt anyone being there?

I too would love to see computed gotos become part of the standard, but I don't worry about it much. GCC is everywhere that C matters, so I just use that.

I disagree on the Pascal-style strings. You can't merge the tails of the strings, and you still need all those library functions in order to deal with strings longer than 256 octets (or whatever your very-small limit would be). Trading one buggy kind of string for two buggy kinds of strings doesn't sound like a win, just another way to confuse programmers into using the wrong data structure.

What exactly do you mean by local type inference?

26 Name: dmpk2k!hinhT6kz2E : 2008-07-10 00:52 ID:Heaven

> Do trigraphs really hurt anyone being there?

They make the compiler more complicated, and nobody uses them today. C has a number of obscure corner cases that make parsing it a lot harder than it needs to be. Complexity adds up; death by a thousand cuts.

> GCC is everywhere that C matters, so I just use that.

Have you seen clang? If I want to use another compiler for whatever reason, I can't rely on GCC extensions. It's a good idea so it should be in the standard, dammit. I want tail-calls too while I'm at it, so I can use C as an intermediate language for more advanced languages without the performance hit of trampolines. Portable assembly my arse.

> You can't merge the tails of the strings

Make your own datatype. If you need something obscure like this, that's why ADTs exist. Or be crazy and use a GC so you can slice strings.

Why do we suffer under a long and sordid history of buffer overflow vulnerabilities since C's inception? Null-terminated strings are a bug-prone default thanks to some performance benefit that only applies to the VAX. Not to mention that non-8-bit-clean strings just suck.
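To make that bug-prone default concrete, here's a small sketch I'm adding (the helper name `copy_str` is mine, not from the thread): even strncpy, the supposedly "safe" variant, silently drops the terminator when the source fills the buffer, which is exactly the kind of trap being complained about.

```c
#include <string.h>

/* Copy at most n-1 bytes and always NUL-terminate -- the behaviour
 * many people wrongly assume strncpy() already provides. */
static char *copy_str(char *dst, const char *src, size_t n)
{
    strncpy(dst, src, n - 1);
    dst[n - 1] = '\0';   /* strncpy leaves dst unterminated if src fills it */
    return dst;
}
```

With a length-prefixed string type this whole class of mistake simply can't arise, because there is no terminator to forget.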

> whatever your very-small limit would be

Use a machine word instead of a byte. The string can be as large as your memory space. This is the minimum that the rest of the world uses. Hell, even the masochistic C++ people hate C's strings, although they can't completely escape them.

> local type inference

Put types in the parameter list, like normal. Locals inside the function figure out what they should be based on the signatures of the local and called functions. See what newer versions of C# and D are doing.

27 Name: #!/usr/bin/anonymous : 2008-07-10 03:21 ID:m2TJrD+R

> Use a machine word instead of a byte. The string can be as large as your memory space.

Eaah. That makes common routines like strchr (or indexOf if you prefer to call it something else) much more expensive to implement. It also wastes some memory, and means that very large strings (and bitmaps) will render the CPU caches useless because all the libraries keep bouncing back to the beginning to check the length.

Making this work in practice would mean changing the way we presently make CPUs...

>> You can't merge the tails of the strings
> Make your own datatype. If you need something obscure like this, that's why ADTs exist.

Uh, then you don't mean pascal strings at all? Or maybe you can qualify this better?

> Put types in the parameter list, like normal.

That's a huge ABI change, but one I've recently looked at. Consider that you'll be doubling, at a minimum, stack use on CISC systems, and wasting many registers on RISC systems. All for the "occasional" time that you actually want this information?

Tagged pointers are a much cheaper way to do it, and don't require ABI changes.

> Why do we suffer under a long and sordid history of buffer overflow vulnerabilities since C's inception?

I think this statement says more about where you're coming from, but I'll bite.

Because programmers are stupid? It seems like a good idea to try to get the language to protect the programmer from their own stupidity, but frankly programmers demonstrate their own stupidity even in safe languages, such as with SQL injection or other quoting problems. I simply don't think it's possible, and in fact trying to increase safety (without making it absolute) just serves to make the programmer think they can get away with more.

A better solution is to make the wrong way to program generate wildly inaccurate results. That means designing APIs that fail quickly, such as wiping memory during free(), or padding buffers with a printable character instead of NUL.

Some of your users will complain about your APIs being hard to use and that they "just can't figure it out", but really, this is why secure programs are so hard to find: so many people "just figuring it out" instead of actually writing secure software.

> Have you seen clang?

Yes. It supports many GCC extensions and promises the support of more in the future.

Seeing as it's very immature and its code generation still sucks awfully right now, I don't see why you'd want to contort yourself for it.

28 Name: 20 : 2008-07-10 08:15 ID:wStLV0uC

>>21
Sorry for taking so long to reply.

> Things I'd love to see:
> * Null-terminated strings eliminated in favour of Pascal-style strings. In other words real strings, not an error-prone pointer hack.

There are open source C string libraries for that. Why don't you use those? I disagree with you on this.
http://bstring.sourceforge.net/

> * The standard library cleaned up. Since strings know their length and are 8-bit clean, you can throw away some function and merge others. Also, function names and arguments can be made consistent.

Strings know their length? I don't know what that means, unless you take point 1 as implemented, but I disagree with point 1. I have to disagree with point 2 as well.
Strings are not 8-bit clean: CHAR_BIT in ISO 9899:1999 is required to be at least 8, but it can have a value greater than that. POSIX.1-2001 guarantees CHAR_BIT to equal 8.

> * Get rid of trigraphs.

I disagree. They are not that annoying.

> * Some form of local type inference.

What exactly is local type inference? Can you give a C example of what you have in mind? I did google it, but the results I got from wikipedia were disappointing.

> * I'd love to see computed gotos and labels-as-values become standard.

Once more I do not understand. What makes you think you can not have a "computed goto" in C? What exactly do you have in mind? Example again please.

> It'd still be C (other than the type inference), but it'd be a less error-prone and more pleasant language to use. The only reason I'd call it "2.0" is because the new standard library wouldn't be backward-compatible.

Your only issue, as I see it, is strings. Use bstring then. For me strings were never an issue, but C was my first language. Where do you come from?

> I'm undecided if strings should support Unicode. Unicode's the future, and it'd be nice to put unicode in string literals and have it Just Work, but have you seen the size of ICU? An is to make strings use machine words instead of bytes and leave ICU separate.

String literals do "support" Unicode.

29 Name: dmpk2k!hinhT6kz2E : 2008-07-10 08:16 ID:Heaven

> That makes common routines like strchr much more expensive to implement

On old machines the opposite was the case (use (e)cx and loop to forgo one of the explicit comparisons). On modern machines it makes little difference.

My assembly is horribly rusty, but it's something like:

next:   mov bl, byte ptr [edi]   ; load character from string
        cmp bl, al               ; al contains the character we're looking for
        jz  found                ; first conditional branch
        inc edi
        cmp bl, 0                ; check for null termination
        jnz next                 ; second conditional branch
        jmp not_found

versus

next:   mov bl, byte ptr [edi]
        cmp bl, al
        jz  found
        inc edi
        dec ecx                  ; instead of cmp we have dec, which sets the zero flag
        jnz next
        jmp not_found

They're both the same except for one instruction.

However, Pascal-style strings make strlen much faster, and by extension any string function that allocates a new string.

I think this is a tangent though. I'm a fan of getting it correct, then getting it fast. C's long history of buffer-overflows is argument enough for me against C-style strings. The various strn* implementations are a band-aid to the real problem.

> It also wastes some memory

I don't think that's relevant on modern machines. If you're working with many millions of very short strings then there's always an ADT.

> means that very large strings (and bitmaps) will render the CPU caches useless because all the libraries keep bouncing back to the beginning to check the length

But that's the point of a cache: it stores hot data. There are a few normal cases:

  • you follow a pointer to a string: no difference since you're loading the cache line containing the beginning of the string either way
  • often randomly access a short string: no difference since the cache lines remain hot
  • often randomly access a long string: little difference since the performance is going to suck regardless; the CPU can't prefetch cache lines since you have no stride. In fact the hottest piece of data will be your string's length
  • often start scanning somewhere after the beginning of a string: this would keep the cache line containing the length hot as well. You might have to worry about false sharing here.
  • rarely doing any of the above: since you're rarely doing it, it's not in cache. But since you're rarely doing it, it makes little performance impact anyway.

There might be one bad performance case: you have a huge number of strings longer than a cache line where you mostly linearly access the end of each string, but only rarely per string, and most of the time of your program is spent accessing these huge number of strings. Other than being rare, this might be offset by out-of-order execution (I'm not sure).

I'm fairly confident that for every catastrophic performance case due to cache effects that someone can raise against Pascal-style strings, I can do the same for C-style; it'd be a race between microoptimization versus algorithm.

> Or maybe you can qualify this better

A struct with a length and an array of bytes. I should have said Pascal-style, not Pascal.
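A minimal sketch of what such a length-prefixed type might look like (the names `pstr`, `pstr_new`, and `pstr_len` are mine, purely illustrative, not anything from the thread):

```c
#include <stdlib.h>
#include <string.h>

/* Length-prefixed, 8-bit-clean string: length in a machine word,
 * bytes are uninterpreted (embedded NULs are fine). */
struct pstr {
    size_t len;
    unsigned char data[];   /* C99 flexible array member */
};

static struct pstr *pstr_new(const void *bytes, size_t len)
{
    struct pstr *s = malloc(sizeof *s + len);
    if (!s) return NULL;
    s->len = len;
    memcpy(s->data, bytes, len);
    return s;
}

/* O(1), unlike strlen() */
static size_t pstr_len(const struct pstr *s) { return s->len; }
```

Note that the length lives in a size_t (a machine word), so the 256-octet ceiling of classic Pascal strings doesn't apply, and a byte with value zero is just data.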

> Consider the fact that you'll be doubling at a minimum, stack use on CISC systems, and wasting many registers on RISC systems

I believe you're thinking of something different. If you're referring to dynamic typing, this isn't it; types are all handled at compile time. The machine instructions generated are identical to what you'd get with manifest typing (the stuff in C/C++/Java/Pascal/etc.), but there's less typing for the meatbag between the keyboard and chair.

As an aside, dynamic typing doesn't have to increase memory if you're willing to reserve bits on data words themselves. It's a tradeoff between space and time.

30 Name: 20 : 2008-07-10 08:16 ID:wStLV0uC

>>26

> Put types in the parameter list, like normal. Locals inside the function figure out what they should be based on the signatures of the local and called functions. See what newer versions of C# and D are doing.

Is it similar to what I asked for? A typeof keyword?

31 Name: dmpk2k!hinhT6kz2E : 2008-07-10 08:16 ID:Heaven

continued...

> It seems like a good idea to try to get the language to protect the programmer from their own stupidity, but frankly programmers demonstrate their own stupidity even in safe languages- such as with sql injection or other quoting problems

Of course, but note that with C you can have both buffer overflows and SQL injection attacks. With higher-level languages it's only SQL injection attacks. Some make it very difficult to do even the injection attacks, due to string tainting.

Part of my job involves security (can you guess?), so I'd really appreciate fewer potential attack vectors. Let's put it this way:

Let's say you have two boxes:

  • one has 30 ports open
  • one has two

You can argue that if the software handling each port was written properly it wouldn't be a problem, but do you really want to rely on that? What if your job depends on it? What if your whole business relies on it?

I hope you see the parallel here. One box has fewer potential points of attack. Likewise, a language can have fewer points of attack.

> A better solution is to make the wrong way to programs generate wildly inaccurate results.

Or make it difficult to do it the wrong way at all.

Let's look at an example from C and D. What is the value of:

int c;

In C it's whatever was already at that location. Let's say it's a local variable on the stack. Then the value is probably from a chain of functions called previously by a parent function.

In D it's 0. Always. It's explicitly initialized this way.

I've had this problem with C bite me in the ass a number of times in the past, and bugs caused by forgetting to initialize can be hard to track down without static analysis. So I appreciate this.

But wait! What if you really need every bit of performance, you're sure your code is correct, and you're willing to bypass this safety valve provided by D?

No problem!

int c = void;

You're now back to C's behaviour. As an added bonus, if another programmer comes along later they can be sure that, yes, not initializing it was intentional. It's not that you forgot.

I don't particularly like D, but it was designed by a former aircraft engineer. Some of the design decisions visibly reflect that.

> I don't see why you'd want to contort yourself for it.

Yes. But I'll want to one day. Or maybe PCC. Who knows.

I'm not a fan of barriers to portability. The Lisp world is having problems right now because the Common Lisp standard hasn't evolved to match the times. Let's not do that to C too.

32 Name: dmpk2k!hinhT6kz2E : 2008-07-10 08:17 ID:Heaven

Can we have a longer post limit? D:

33 Name: dmpk2k!hinhT6kz2E : 2008-07-10 08:37 ID:Heaven

>>28

> Why don't you use these?

Because character literals are still null-terminated, so now we need two sets of string functions instead of one. That's a recipe for bugs galore.

This is why the standard library should be revamped: let's make something like bstring the standard and chuck out the current mess of fail.

> They are not that annoying.

They're useless.

> What exactly is local type inference?

Here's how we do it now:

int foo( int x )
{
    int bar = x;
}

Here's how we could do it:

int foo( int x )
{
    var bar = x;
}

The compiler figures out that var should be int, so you no longer have to provide the type for each local variable when you define it. It's a bit like a primitive form of Damas-Hindley-Milner type inference in OCaml or Haskell that only applies within a function.

Looks silly in this trivial example, but it's really nice for anything bigger.

As mentioned in the other posts I just made, C# 3.0 and D both make use of it. So will C++0x.

> string literals do "support" unicode.

UTF-8. Now let's say I want to index or concatenate that. Whoops.
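To make the indexing complaint concrete, here's a sketch I'm adding (the helper `utf8_points` is my own name; it assumes valid UTF-8 and does no validation): byte indexing and code-point indexing diverge as soon as a literal contains multibyte sequences.

```c
#include <stddef.h>

/* Count UTF-8 code points by skipping continuation bytes (10xxxxxx).
 * Assumes the input is valid UTF-8; no validation is attempted. */
static size_t utf8_points(const char *s)
{
    size_t n = 0;
    for (; *s; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)
            n++;
    return n;
}
```

So s[3] on a UTF-8 literal lands in the middle of a character, and naive concatenation of truncated buffers can split a sequence in half.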

34 Name: dmpk2k!hinhT6kz2E : 2008-07-10 08:41 ID:Heaven

Sorry, I missed this:

> What makes you think you can not have a "computed goto" in C?

It can. GCC does.

I'd like every C compiler to support it.
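For the record, the extension in question looks something like this (a sketch of my own; `&&label` and `goto *` are the GCC/Clang labels-as-values extension, not standard C, and the toy "opcodes" are made up for illustration):

```c
/* Dispatch via labels-as-values: each opcode jumps directly to its
 * handler instead of going through a switch. GCC/Clang only. */
static int run(const unsigned char *code)
{
    static void *ops[] = { &&op_halt, &&op_inc, &&op_dec };
    int acc = 0;

    goto *ops[*code];
op_inc:  acc++; code++; goto *ops[*code];
op_dec:  acc--; code++; goto *ops[*code];
op_halt: return acc;
}
```

This is the trick bytecode interpreters use to avoid the bounds check and extra branch of a switch-based dispatch loop.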

35 Name: 20 : 2008-07-10 09:15 ID:wStLV0uC

>>33
So it's actually what I asked for, typeof().
Though what you ask for is only part of what I asked for; and not only that, but what you ask for is almost useless.

> This is why the standard library should be revamped: let's make something like bstring the standard and chuck out the current mess of fail.

I disagree. bstring for ME and MY projects is bloat. Do you honestly think bstring can run efficiently, without you noticing, on an embedded system? Which is where most C programming is now?

> UTF-8. Now let's say I want to index or concatenate that. Whoops.

I think you lack C skills. what the hell are you talking about?
man wcscat.
You can use [] to index.

36 Name: #!/usr/bin/anonymous : 2008-07-10 12:12 ID:aNYkIZ6Y

> I don't think that's relevant on modern machines.

Well, you're wrong; I just measured it. On large strings (more than a page big) the cost is significant. On modern systems that means flushing the CPU cache and more page faults. On a PIC it would mean no C at all (or at least, no strings).

Go ahead and use Java or D or whatever if you think the costs don't matter.

> Part of my job involves security
> I've had this problem with C bite me in the ass a number of times in the past,

You're an inexperienced C programmer and you're working on security systems. No wonder you think it's a good idea.

You think that the only possibility is being more careful, and having the computer assist you at that (because clearly, being careful is hard).

Of course, the reality is that you shouldn't be writing C code because when you do it, you get security vulnerabilities. You just minimize the amount of C and hope for the odds, right?

When I write C, I don't get security vulnerabilities. That's not bragging, and it's not because I'm much more careful with bounds checking. I even use strcpy().

Real security comes from knowing exactly what the interactions are at the important moments, and failing fast. Compiler hiding makes that difficult, which is why I use C. Using Java or Perl would remove bounds checks, but it'd require knowledge about their innards instead of just the operating system itself.

I would, however, like to see O_TRUNC go away. Maybe renaming it to O_TRUNC_THIS_DOESNT_DO_WHAT_YOU_THINK_IT_DOES would deter people...

> The Lisp world is having problems right now because the Common Lisp standard hasn't evolved to match the times.

Uh, no. The Lisp world is perpetually having problems because using Lisp effectively requires a much higher skill level than the populace has.

>> What exactly is local type inference?
> example clipped

Having a "var" keyword figure out the type of things is a bad idea. And no, it isn't even remotely DHM-typing; a simple string/symbol replacement is what it is. You can use typeof() for this right now if you want to try it out and see how awful it is.

37 Name: dmpk2k!hinhT6kz2E : 2008-07-10 16:05 ID:Heaven

> what you ask for is almost useless.

Why?

> man wcscat.

Yes, wchar_t. That's not UTF-8, which you can at least write legibly in literals. How do you plan to embed wide characters in a string literal? Give me an example of a wide string literal that contains: 双葉ちゃん

> You can use [] to index.

For wchar_t. Not for UTF-8. We keep coming back to problems with literals. And of course wide strings still aren't clean; they still have a null terminator.

> Do you honestly think bstring can run efficiently without you noticing in a embedded system?

Then use C 1.0 on your system. Let's just hold back the closest thing there is to portable assembly for that.

May I point out that embedded systems are pretty beefy today? Unless you're programming on an HC11, you have more computing power and memory than my old 286, where Pascal did fine. This will increasingly be the case.

What platform are you writing to?

> I just measured it

What are you waiting for? The source for your test, please!

> You're an inexperienced C programmer and you're working on security systems. No wonder you think it's a good idea.

First, I don't work on C anymore. Not at work anyway. I would appreciate if you do not make any claims about my skill. Other than being crass, you don't know me.

My arguments will stand on their own merits.

> You think that the only possibility is being more careful

A strawman. I'm well aware that security is layered. But such simple buffer overflows are just one more pointless source of vulnerabilities, as has been demonstrated thousands upon thousands of times. We come back to my box-with-open-ports analogy; there are local escalation exploits aplenty.

> And no, it isn't even remotely DHM-typing

I never said it was. Please read my post more carefully:

> It's a bit like a primitive form

Note the like.

38 Name: #!/usr/bin/anonymous : 2008-07-10 17:50 ID:m2TJrD+R

>>35

wchar_t is worse than useless.

>>37

You use L"string" to make wchar_t literals, but don't do that.

> What are you waiting for? The source for your test, please!

Are you serious?

struct str { int len; char s[1]; };
struct str *m;
int i, n;
for (n = 0; n < 4096*1024; n += 1024) {
    m = malloc(sizeof(struct str)+n);
    for (i = 0; i < n; i++) m->s[i] = 0x80|(n&127);
    m->len = i;
    /* flush cache here */
    fun(m);
}

Make sure your compiler doesn't cheat so you're actually measuring cache fetches. I used GNU lightning to flush the cpu cache before continuing.

Then play with passing (volatile)i versus access to (volatile)m->len to fun() so you can compare. I used something like this:

void fun2(const char *s, int len) {
    int i;
    for (i = 0; i < len; i++) {
        if (s[i] == 0) break;
    }
}
void fun(struct str *x) {
    fun2((const char *)x->s, (volatile int)x->len);
}

This is entirely repeatable. You'll note that if you instead do something like this:

void fun(struct str *x) {
    int i;
    for (i = 0; i < x->len; i++) {
        if (x->s[i] == 0) break;
    }
}

it's a lot slower, because every loop iteration can cause a page fault. Obviously you'll need to preallocate however much memory is in your system to force it to swap.

If your results don't match, try increasing the number of cycles, or increasing the load on your system. The figures I posted above give me a difference of greater than 10 seconds.

> I would appreciate if you do not make any claims about my skill. My arguments will stand on their own merits.

You brought up your own skill level as being relevant when you brought up security. Don't do that.

Your argument was that safety was important for security and security is important to you because it's a big part of what you do. You seem to believe that inexperienced programmers can be careful enough to write secure systems if they get enough help from the compiler.

I think that's retarded.

> I never said it was [DHM typing].

No, you said it was like it. It isn't. It's not even close.

C types aren't. They're simply convenient accessors. Accessing using "var" is pointless because "var" doesn't specify the size of the type. Morphing the type as the value changes, as C# does, isn't C.

> May I point out that embedded systems are pretty beefy today?

No you may not. I still use embedded systems with memory measured in bytes. A C 2.0 should be able to replace a C 1.0 if you're trying to correct defects in C 1.0.

On the other hand, if you're just singing wishes, you might as well get behind something like D or C++- something without any possibility of overtaking C.

39 Name: #!/usr/bin/anonymous : 2008-07-10 17:50 ID:Heaven

ugh, wakaba fucked up my formatting.

40 Name: dmpk2k!hinhT6kz2E : 2008-07-10 18:39 ID:Heaven

> Are you serious?

Of course. :)

Thanks for providing the code. I'll poke at it when I get home. At the very least I'll learn something new.

> You brought up your own skill level as being relevant when you brought up security.

It wasn't meant that way. Rather, it's why I'm so concerned about buffer overflows. All the software we use here is either written in C, uses a C library, or is built on C. Most of the security advisories we've had to worry about were related to C's handling of strings.

> Accessing using "var" is pointless because "var" doesn't specify the size of the type.

No, but the compiler knows what var should be. I'm not sure I see the problem here. Can you give an example?

This is what we have now:

int foo( int x, unsigned char y, char *z )
{
    int a = x;
    unsigned char b = y;
    char *c = z;
}

This is what I'd like, for no other reason than it makes code visually less cluttered:

int foo( int x, unsigned char y, char *z )
{
    var a = x;
    var b = y;
    var c = z;
}

This is entirely at compile time. The types do not change. As you said, it's a bit like string substitution. So what am I missing?

As a bonus, if you change the type signature you don't need to worry as much about local types (e.g. going from assigning from a signed int to a signed int, to assigning from an unsigned int to a signed int because you forgot to change something in the body).

> Morphing the type as the value changes as C# does isn't C.

At compile time? Run time?

Why shouldn't C be able to do something like this at compiler time? What problems will it cause?

> I still use embedded systems with memory measured in bytes.

Like an HC11. It has 256 bytes of internal RAM.

I realize this is a brush-off, but I think assembly or Forth is more suitable for a device that is so constrained.

Okay, so your machine has very little memory. Chances are its word is 16 bits. This amounts to eight extra bits per string, which is mostly in ROM. If you really can't afford that overhead in RAM... ADT.

41 Name: dmpk2k!hinhT6kz2E : 2008-07-10 18:59 ID:Heaven

Sorry, I missed this:

> You seem to believe that inexperienced programmers can be careful enough to write secure systems if they get enough help from the compiler.

Not at all.

I think it makes a class of possible vulnerabilities a lot less likely. It can't prevent them. It won't do anything else about the myriad other possible problems you get with a low-level language. But it'll dramatically reduce a common class of problems.

It's because I don't want to rely on fallible developer discipline that I believe we should have a safer default. Developers will make mistakes, so let's give them fewer chances to make a catastrophic one.

42 Name: #!/usr/bin/anonymous : 2008-07-10 19:35 ID:m2TJrD+R

> It wasn't meant that way. Rather, it's why I'm so concerned about buffer overflows. All the software we use here is either written in C, uses a C library, or is built on C. Most of the security advisories we've had to worry about were related to C's handling of strings.

Then I apologize!

I simply disagree; I think it has to do with the irresponsible handling of strings. If you'll note, that can easily include injection and quoting attacks as well.

> At compile time? Run time?
> Why shouldn't C be able to do something like this at compile time? What problems will it cause?

I'm talking about compile time as well; it's a readability thing. When I do audits, I use ctags to track the sizes of the accessors when checking for pointer escapes (off-by-one errors, etc). If I see this:

do { char *i; /* 50 lines later */ something-with-i } ...
do { int i; /* 50 lines later */ ...

then it's hard to tell what the type of "i" is by looking at it.

I personally use very short functions, but I frequently have to look at code like this. var would work exactly in this way except I would have to follow the assignment to see what its type is.

Compilers can do amazing things, and the cost (as you've noted) isn't always in run-time cycles.

But as I said, you can try using var right now with some macros. Try something like this:

#define var(x, y) typeof(y) x = (y)

to see how you like it. It'll be easier to talk about what a good idea it is when you've tried it out in real code for a while.

I find it very difficult to follow, and it seems obvious to me that this would be a good way to hide complexities and costs from the programmer. Something that I think contributes to a lot of defects and security bugs in the first place.

> I realize this is a brush-off, but I think assembly or Forth is more suitable for a device that is so constrained.

I agree, most of the time. Unfortunately, I have customers afraid of forth...

> I think it makes a class of possible vulnerabilities a lot less likely. It can't prevent them. It won't do anything else about the myriad other possible problems you get with a low-level language. But it'll dramatically reduce a common class of problems.

Trading one kind of problem for another isn't really winning; if programmers learn that s[-1] contains the length of a string, you'll start seeing code like this: fill_buffer(s,0,s[-1]); and while fill_buffer will certainly be able to check that length <= s[-1], it won't know that this is wrong.

43 Name: 20 : 2008-07-11 09:14 ID:Heaven

>>37
If you are going to reply to two or more people please put their number quoted like >> N.

> Yes, wchar_t. That's not UTF-8, which you can at least write legibly in literals. How do you plan to embed wide characters in a string literal? Give me an example of a wide string literal that contains: 双葉ちゃん

const wchar_t *foo = L"双葉ちゃん";

AGAIN: what the HELL are you talking about?
By the way, yes I'm convinced that you are not a skilled C programmer, because of your wrong terminology. A skilled C programmer would have read the standard. I don't know whether you're good at what you do, and I won't guess. But you're not a C expert.

44 Name: #!/usr/bin/anonymous : 2008-07-13 16:16 ID:n8tbYr0o

man wc scat

45 Name: dmpk2k!hinhT6kz2E : 2008-07-13 18:38 ID:Heaven

> what the HELL are you talking about?

Something I'm wrong about. Why ask the rhetorical question?

> But you're not a C expert.

I never said I was; part of the reason I start language flamewars is they're an entertaining way to find gaps in my knowledge.

Anyway, here's another question: wchar_t is two bytes, not four, on Windows. Unicode needs 21 bits to represent without surrogate pairs. How will the indexing work now?

46 Name: dmpk2k!hinhT6kz2E : 2008-07-13 20:22 ID:Heaven

>>42
I just bumped into this: http://erikengbrecht.blogspot.com/2008/07/love-hate-and-type-inference.html

Parts of it match up with your arguments against type inference.

47 Name: 42 : 2008-07-13 23:08 ID:aNYkIZ6Y

>>46

Interesting, though note that I'm not against type inference per se; I just think inferring this particular thing that happens to be called a type is a bad idea.

SBCL's type inference has found a significant number of bugs in my own code, especially in code paths that I wouldn't otherwise test heavily.

48 Name: 42 : 2008-07-13 23:09 ID:aNYkIZ6Y

I did a quick search, and found myself doing a lot of this:

if (strchr(s+1,'#')) { ... }

Your C2.0 with a native string type would have me write this:

if (strchr(s,'#',1)) { ... }

Not necessarily a small change, but certainly not a reduction in arguments. Things like strncmp() would double in argument count; can you imagine calling a string function with 5 arguments and not finding an error?

49 Name: dmpk2k!hinhT6kz2E : 2008-07-14 00:44 ID:Heaven

> Things like strncmp() would double in argument count

I was thinking you'd only pass the strings, and strncmp() would internally access the length of each string. So it'd be a variadic function requiring two string arguments and an optional third length argument if you want to compare less than the full lengths.

50 Name: dmpk2k!hinhT6kz2E : 2008-07-14 00:56 ID:Heaven

> think inferring a particular thing that is also called a type a bad idea.

What if the inference information was made available to the editor? One much-touted benefit of clang is that all the compile-time information is made easily accessible to the rest of the world.

A clang or GCC plugin could emit a ctags index file with the inferred types.

51 Name: #!/usr/bin/anonymous : 2008-07-14 02:58 ID:aNYkIZ6Y

>>49 What if you want to start from offset=1 for string "a", and offset=2 for string "b"?

You clearly need four arguments for strcmp(), which means your length would be argument number five.

52 Name: #!/usr/bin/anonymous : 2008-07-14 03:03 ID:aNYkIZ6Y

>>50 What if you try it for a month? Or convert an existing application to it?

As I already pointed out, you can try using var as a macro right now, and then you'll be in a better position to talk about it.

I think it's unnecessary at best, and costly at worst. I think it's a bit early to defend it with editor support when it's not even clear there's a benefit to it.

53 Name: 20 : 2008-07-14 14:25 ID:Heaven

>>48
that's the stupidest request ever.

#define mystrchr(a,b,c) strchr((a)+(c), (b))

If they change strchr now, all existing code will break.
The idea is plain stupid.

>>52
typeof can be beneficial for macros. It would be possible to write a swap macro, for example.

54 Name: 42,48,52 : 2008-07-14 15:49 ID:Heaven

>>53 I agree, but I'm playing along: dmpk2k seems genuinely interested in this thought experiment. He started with "this is obviously a good idea", and has come back from that somewhat.

I'm sure you can interact on this subject without hyperbole.

55 Name: dmpk2k!hinhT6kz2E : 2008-07-16 04:56 ID:Heaven

It's hard to hold a position when there's good evidence against it. :)

In any case, food for thought.

56 Name: #!/usr/bin/anonymous : 2008-07-16 18:04 ID:Heaven

>>55

You don't watch the news, do you?

57 Name: #!/usr/bin/anonymous : 2008-07-19 20:36 ID:Heaven

>>56
I don't believe the people you're referring to are as well-educated and rational-thinking as the average /code/ regular.

58 Name: #!/usr/bin/anonymous : 2008-07-24 00:50 ID:Heaven

>>57 is better than those people because he's a programmer.

59 Name: #!/usr/bin/anonymous : 2008-07-27 09:45 ID:o893+jN4

>>49
Variadric functions need to be able to figure out how many arguments there are.

So a variadric strncmp would need 3 args for no length, or 4 for a max length number.

That's hardly an advantage over strcmp with 2 and strncmp with 3.

Anyway, enough fail. Too many people want to add generic types to C, basically a typeof(void*) that returns what struct it is.

But then the usual way is to simply have the first field of the struct be an id (an int or a string): you cast the pointer to a struct laid out like that, read the id, and then cast appropriately.

It's the thing that people forget. You just do it yourself in C.

Don't like C's strings? Then do some yourself... I've written my own "buffer" library, called dynbuf, which I use in my webserver.

Here's some of the header:

typedef struct
{
    char *data;
    size_t size;
    size_t start_offset;
    size_t end_offset;
} dynbuf;

dynbuf *dynbuf_create (size_t initial_size);

void dynbuf_append (dynbuf * db, const char *data, size_t size);
void dynbuf_append_from_file (dynbuf * db, FILE ** file, size_t read_size);

/* Successive calls will consume more and more of the buffer. The returned string points into the buffer, and is good until the buffer is fully consumed.*/
char *dynbuf_gets (dynbuf * db);

void dynbuf_grow (dynbuf * db, size_t size);
size_t dynbuf_length (dynbuf * db);

/* Return the data pointer+start_offset. The buffer will grow before it fills and it will always be Null terminated.*/
char *dynbuf_show (dynbuf * db);

/* Om nom nom precious data bytes I must eat them. */
void dynbuf_consume (dynbuf * db, size_t size);

void dynbuf_destroy (dynbuf * db);

This is what C is about, not complaining that someone didn't do 2 seconds of work for you already, and then force you to use it by making it the standard part of the language that all std functions use.

60 Name: #!/usr/bin/anonymous : 2008-07-27 09:52 ID:o893+jN4

>>59
Oh, and FILE ** file because it will close it and set your copy to NULL if it reaches the end. That suits the way that I use it better.

And then there's a less control-freak append_from_descriptor function which I use for sockets mainly, etc.

So this is normal C programming, and there will NEVER be buffer overflows!

The only downside stems from the fact that data is never shuffled; there are no in-memory copies.

So if you have a long-running buffer, and are using it as some kind of FIFO pipe or something, and it is never fully consumed, then its size is just going to grow and grow and grow. And it never reallocs down, only up.

This is because it never starts from the beginning unless it is "empty", obviously. But then I don't use it for stupid things, because as a programmer I use the best tool for the job. Simple.

61 Name: dmpk2k!hinhT6kz2E : 2008-07-27 21:29 ID:Heaven

> This is what C is about, not complaining that someone didn't do 2 seconds of work for you already

Dear Anonymous,

Arguments for programmer discipline apply to PHP too, with the resulting never-ending stream of SQL injection attacks. By comparison, it's rarely seen in Python or Perl. PHP advocates tend to advance the same argument: if you don't like it, build it yourself.

Of course I can write my own string handling, although it'll take much longer than two seconds since I'm not you, but that's really not the problem, is it?

Also, please read a thread before feeling the urge to add to it. Someone else covered that already a couple weeks ago, and in a much nicer manner.

62 Name: #!/usr/bin/anonymous : 2008-07-28 07:23 ID:o893+jN4

>Arguments for programmer discipline apply to PHP too, with the resulting never-ending stream of SQL injection attacks.

Isn't that a different issue though? That sounds like people are not planning ahead and just coming up with an ad hoc design as they go along, and failing to consider everything as they do so.

So they aren't really building anything at all, just screwing around, perhaps to explore and learn or what have you. Now if they hand in the results of screwing around as a finished product then lol. Of course.

Anyway I think that sometimes people blur the difference between a language and a framework too much. Several languages come with a framework of sorts, as a convenience. Just like in Java you don't have to use their ADT implementations or their GUI classes, you can just roll your own (but you'll need to link some C in there to get to openGL or GDI or whatever you use to present your own GUI to the user).

If a language is going to include a framework as part of the actual language's standard specification, then there are 2 ways to do it. PHP is a half-arsed pile of crud, as it has been incrementally expanded over the years.

One way to go is to not force or expect anything of the programmer, because you don't want to force them to do something that they otherwise wouldn't do just to use the implementation of the language, and the other is to force them but try to make it useful / not ACTUALLY in the way.

The second means that they have to target some specific kind of area / audience, and if they miss the mark then it fails.

A proper programming language would not be aimed at any one specific area unless doing so would not hinder any other aspect of it; otherwise it isn't general, or will rub some people the wrong way.

If you pay attention to what you are actually typing, C is this language. But it doesn't come with a whole lot so you'll be writing a LOT of stuff yourself.

Now if you aren't the kind that writes stuff yourself then you'll be using other people's libraries, in which case you might as well be using a different language that provides such things anyway, if you feel like it. But remember that chances are, these things that it provides are themselves wrappers around a C library.

As a side note, this is the reason why the VAST majority of useful libraries are coded in C, not C++.

63 Name: #!/usr/bin/anonymous : 2008-07-28 11:10 ID:Heaven

>>59
What a fucking idiot you are. What the hell is a variadric function? VARIADRIC?!
>>60
Load of bollocks
>>61
The difference is that there's already good libraries available for C, while the whole PHP thing is a load of bollocks.

64 Name: #!/usr/bin/anonymous : 2008-07-28 13:15 ID:gIhG/SAG

>>61

>By comparison, it's rarely seen in Python or Perl.

Someone hasn't been on the Internet for very long.

65 Name: #!/usr/bin/anonymous : 2008-07-28 13:26 ID:Heaven

>>64

> By comparison

66 Name: #!/usr/bin/anonymous : 2008-07-28 16:41 ID:UdN9qNDr

>>65
That doesn't mean what you think it means.
Specifically, it doesn't mean ``relatively''.

67 Name: dmpk2k!hinhT6kz2E : 2008-07-28 20:23 ID:Heaven

Then what did I mean?

"Rare" is always relative to something.

Think a moment before dragging out the pedant hammer.

68 Name: #!/usr/bin/anonymous : 2008-07-28 21:21 ID:m2TJrD+R

>>63

It's a typo for "variadic" - >>59 means that functions with a variable arity would then need to know how many arguments were actually pushed on the stack, which changes the ABI significantly.

69 Name: #!/usr/bin/anonymous : 2008-07-28 23:19 ID:L+fWfNkO

Ohhh C++.

If you go with C++ then you will most likely be using the object-oriented features of it.

Then it will confuse you as to why variables that hold objects are always the value of that object. In an OO language the variable should be a reference to the instance of the object, something C++ doesn't do. You have to do that yourself. And if you do things right, you will be jumping through these hoops a lot.

C++ fails at the model it's most used for in the most far-reaching and basic way.

70 Name: #!/usr/bin/anonymous : 2008-07-29 11:27 ID:Heaven

>>68
It's not about 'stack'.
C doesn't have a stack. The function itself is required to know the number of its arguments.
So such code is possible:

int f(size_t argnum, ...)

It really has nothing to do with a 'stack'. You either talk about a specific machine or the ISO standard.

>>69

> Then it will confuse you as to why variables that hold objects are always the value of that object.

Do you mean... references?

> C++ fails at the model its most used for in the most far reacing and basic way.

C++ indeed fails for its size and complexity.
OOP fails for its design. OOP FAILS HARD.

71 Name: #!/usr/bin/anonymous : 2008-07-29 16:07 ID:2q0SLCR7

>>70

C++ fails at OOP.

Having reference variables (variables that always hold a reference to an object) makes sense. Most modern OOP languages do that. C++ does not. You can certainly get a reference to an object, but C++'s default variable behavior is not to automatically provide the reference as it should.

72 Name: #!/usr/bin/anonymous : 2008-07-29 16:34 ID:Heaven

>>71
what behavior? oh god, you're another fucking idiot.
It's like saying C should automatically provide pointers to objects. Are you too lazy to type *?

73 Name: #!/usr/bin/anonymous : 2008-07-30 05:16 ID:2q0SLCR7

>>72

No, you are the idiot for not thinking through the ramifications of what I have said.

C shouldn't automatically supply pointers to objects. C++ should, but can't, because it's more than OOP. This flaw makes it not well suited to OOP, yet that is the main kind of programming it is used for.

If you pass an object to a function then it shouldn't be the value of the parameter by default. It should be a reference to the object.

If you don't understand why you would always want to work with a reference, then you don't know OOP.

This is why C++ fails: because it doesn't enforce a basic tenet of OOP, it just accommodates it with extra syntax.

74 Name: #!/usr/bin/anonymous : 2008-07-30 07:03 ID:Heaven

>C shouldn't automatically supply pointers to objects. C++ should

That would make the two languages more inconsistent and more confusing to use.

>If you pass an object to a function then it shouldn't be the value of the parameter by default. It should be a reference to the object

...and what syntax would be necessary should one want the value? The indirection *? Reference parameters in functions are confusing enough, let's not add more "features" to an already excessive set.

75 Name: #!/usr/bin/anonymous : 2008-08-07 20:25 ID:MvEfLX4D

>>70
You're right in that C does not specify that parameters go on a stack. But that's where it ends. In common C ABI specifications, register-based parameter passing conventions (like with amd64, powerpc and sparc) behave exactly the same way, i.e. the caller manages the parameter stack.

And for most people, the ABI is a part of the target they are programming for. Thus practically inseparable from C the language.

76 Name: wat!Lca2LJuYUU : 2008-08-07 23:09 ID:wQcybZdK

>>1 here. I thought this topic would be dead, by now. Anyway, as someone suggested, I stayed on C for a while. I've learned quite a bit, but the problem is I don't know what to do next. Are there some fun libraries to poke around with? If so, recommendations would be nice.

Also, book recommendations for CL and Lisp would be awesome too. :D

77 Name: #!/usr/bin/anonymous : 2008-08-08 14:25 ID:Heaven

>>75
Fucking bullshit. And for most people? Citation please. What do you mean most people? Most people don't know C.

>>76
Learn C well. SDL is fun.
gigamonkeys for common lisp.

78 Name: #!/usr/bin/anonymous : 2008-08-08 16:07 ID:Heaven

>>77

>And for most people? Citation please. What do you mean most people? Most people don't know C.

Pedant. You know precisely what I mean. And I do not see any counterarguments coming from you.

Are you perhaps one of those people who, contrary to readily available evidence, believe that it is impossible to write a working program in C?

79 Name: #!/usr/bin/anonymous : 2008-08-09 05:59 ID:Heaven

>>78
I'm one of those people that know C well.
C does not have a stack. Any other information is unrelated to C, and certainly not inseparable from it.

80 Name: #!/usr/bin/anonymous : 2008-08-09 16:57 ID:MvEfLX4D

>>79
You are also an insufferable pedant, and the sort of person for whom it is of paramount importance to always be right. Regardless of what this does to the general usefulness of the conversation at hand. Fuck you.

Indeed, the C standard does not specify a "stack" for the passing of parameters. This is very much true. As with many things, the C standard doesn't specify the down-and-dirty method of implementation for e.g. automatic variables, alloca() or varargs functions. However, can you present a mechanism that achieves the requirements of the C standard with regard to parameter passing, automatic variables and varargs functions and is not a parameter stack?

Thus, as is usual for the C standard, it stays just the minimum amount on the side of not specifying a stack-based mechanism. I claim that a future C standard, were it to specify a stack-based mechanism, would differ from the current standard only in its explicit use of the word "stack".

For all intents and purposes, the conventions of the target architecture that actual people program for are inseparable from the language as it is seen by the programmer. Thus the average C programmer does not give a rat's ass whether the standard specifies a stack for yada-yada or not: practically every implementation of C manages frames on the stack, alloca()s memory on the stack and passes parameters for varargs functions on the stack.

81 Name: #!/usr/bin/anonymous : 2008-08-10 01:02 ID:Heaven

>>80 Sure! I can be pedantic too!

ZetaC didn't use a parameter stack for arguments. It used a heap.

However, an "expert C programmer" familiar with the kind of C you see on unix-like ABIs (Windows, Linux, MacOS, and so on) would have lots of problems with ZetaC, which, although strictly conforming to C's specification, did strange things in order to be useful to the surrounding lispm.

82 Name: #!/usr/bin/anonymous : 2008-08-11 04:23 ID:AXzcaI+g

>>74

>That would make the two languages more inconsistent and more confusing to use.

Yes it would, so given C++'s OOP usage I say its fundamentally flawed.

>...and what syntax would be necessary should one want the value? The indirection *? Reference parameters in functions are confusing enough, let's not add more "features" to an already excessive set.

You are not getting what I am saying. There should be 0 syntax for the default of passing objects as parameters. It is the default and correct behavior to pass by reference, so it should require 0 syntax to accommodate (known as common sense).

Should there be syntax to pass an object by value? Maybe. Or you could just create a new object to pass in by reference, because that's what the compiler is going to do anyway. Either way those would happen so infrequently that the extra syntax or hoop would be acceptable. And please notice I said passing objects by reference; other variables should be passed by value by default.

83 Name: dmpk2k!hinhT6kz2E : 2008-08-11 05:30 ID:Heaven

> It is the default and correct behavior to pass by reference

Elaborate?

84 Name: #!/usr/bin/anonymous : 2008-08-11 20:07 ID:Heaven

Yeah, I think pass-by-reference is about the dumbest part of most languages that support it.

In FORTRAN it was so bad you could accidentally change the value of CONSTANTS like "4" if you weren't careful...

85 Name: #!/usr/bin/anonymous : 2008-08-11 20:35 ID:svxdzyWV

>>83

In OOP, you create and work with an instance of an object. If you need to work with the instance of the object, it makes complete sense that you would pass a reference to that 1 true instance of that object to a different scope.

If you pass an object by value, what you are doing is creating another copy of that instance that is separate from the original instance. So if you work with a copy, then you need to sync up those copies at some point, or do some other such extra work.

C++ makes you pass in the value of the reference to that object (because everything is pass by value), which is extra syntax. However, when it comes to objects you will always want to be working with the instances you create, so most of the time (if you are doing it correctly) you are using extra syntax to properly work with an object.

While I never pass the value of objects because it's just bad OOP, it might be needed in some weird case, so it should be accounted for, and that should require the extra syntax.

>>84

I think it's just the poor implementation in FORTRAN. Modern OOP languages that have a distinction between value and reference variables make working with OOP more effective than C++.

C++ is an extension to C to accommodate many different programming models, so I understand why this can't be. But C++ would be a good language if they ditched the reliance on C and went full OOP (as C++'s most popular use is OOP).

MS did this with C#. So if someone wants to learn a C-syntax language that is truly OOP then they should go with C#. Also C# is great because of the way it implements templates. C++ is still better than Java at runtime with templates, but it's still lacking, as it's basically just a macro. (and don't go on a trip about how C# is locked in to Windows and the desktop because it really isn't).

86 Name: #!/usr/bin/anonymous : 2008-08-11 21:06 ID:Heaven

>>85

C++ programmers typically pass pointers to objects, and do not normally copy the object itself. However, they do have the option of pointer, copy, and reference access:

void foo_pointer   (Object *x);
void foo_copy      (Object x);
void foo_reference (Object &x);

They don't usually refer to these things as "call by value" or "call by reference" - those terms are uncommon amongst C++ programmers, but common amongst Visual Basic programmers where there are no pointers.

87 Name: dmpk2k!hinhT6kz2E : 2008-08-11 21:16 ID:Heaven

> If you pass an object by value what you are doing is creating another copy of that instance that is separate from the original instance. So if you work with a copy, then you need to sync up those copies at some point or some other such extra work.

Could you provide an example of this?

I'm a fan of being explicit about mutation and restricting possible scope of change. I think passing by reference isn't worth the hazard it presents, at least in a high-level language.

If everything has pass by value semantics, I can be confident in the state of an object, even if I pass it to other methods; it will never change unless I explicitly assign to it.

If globals are generally a bad idea, I don't see why pass by reference should be any different. I think the latter is a restricted form of the former, and should be marked explicitly -- here be dragons.

88 Name: #!/usr/bin/anonymous : 2008-08-12 00:57 ID:DiEZa2yN

>>80

Well, better than being plain stupid.

> As with many things, the C standard doesn't specify the down-and-dirty method of implementation for e.g. automatic variables, alloca() or varargs functions.

It specifies "vararg" functions crystal clearly.
Are you perhaps confused with K&R1 which refused to explain how one would define a function similar to printf?

89 Name: #!/usr/bin/anonymous : 2008-08-12 18:42 ID:svxdzyWV

>>86

>C++ programmers typically pass pointers to objects, and do not normally copy the object itself.

Yes, and my gripe is the extra syntax required to do something that should be the default behavior.

>They don't usually refer to these things as "call by value" or "call by reference" - those terms are uncommon amongst C++ programmers, but common amongst Visual Basic programmers where there are no pointers.

It is not just VB programmers, its any programmers of modern OOP languages like VB, VB.Net (which is different from VB, just has the same style of syntax), C#, Java and others.

VB properly abstracts pointers. Passing any object to another scope passes a pointer (well the value of a reference) to that scope by default. It takes extra syntax to copy an object to mimic passing a value because its not something one would typically do in OOP.

>>87

The idea here is that once you create an instance of an object, when you work with that object you are always working with that instance. Lets say I have a car object and need to pass it to the paint function. Paint takes the car to paint and the color to paint it as parameters. If I pass car by value, a new copy of the 1 car I need to paint is made, it is then painted and then..... uhhhh mmm. What I wanted to do was paint the car, not a copy. So my paint function will need to return a painted car, I will then need to sync up the state of the returned car and the car I passed in. I will need to do extra work outside of the paint function, which makes little sense because the paint function needs to paint the car and be done.

If a reference is passed in, then the 1 car I am working with is painted and the paint function doesn't need to return anything. Once it is done executing I have a painted car.

As for globals, there are many reasons they are bad and the opposite is true for reference type variables.

One reason globals are bad and reference type variables are not is that a programmer can tell when a reference variable is likely going to be changed but can't for a global.

My paint function would have a signature that includes the car and the color. Just from the signature of the function (the parameters it requires) I can tell that the car can be modified. If my paint function modified a global variable, I cannot tell from its signature that it has anything to do with a global, and have to check it line by line. Any function can modify a global, but only functions that have a signature with a car in it can modify an instance of a car, and they will only modify the instance that is passed in.

Also, syncing state between 2 objects that initially start off as copies is needlessly complex, error-prone, and requires more memory.

90 Name: dmpk2k!hinhT6kz2E : 2008-08-12 19:05 ID:Heaven

> So my paint function will need to return a painted car, I will then need to sync up the state of the returned car and the car I passed in.
painted_car = car.paint()

Unless you're multithreading, there is no need to sync an object; what you get back from the call is the newest version of the object. And only crazy people want to use references with threads.

> One reason globals are bad and reference type variables are not is that a programmer can tell when a reference variable is likely going to be changed but can't for a global.

How can they tell if a referenced variable will change? Here's a method call:

foo.bar( baz )

Will baz change or not? You don't know, unless you look at bar(), and all method calls inside bar() that use baz, and all children calls in turn that use baz. I feel pretty strongly that's bad news.

By comparison, if it's a value, you know if baz will change: since you haven't used assignment here, no.

> Also, syncing state between 2 objects that initially start of as copies is, needlessly comples, error prone, and requires more memory.

The only one I agree with is the increased memory usage, and even that can be mitigated. Note that I said semantics. In a high-level language, what the machine code does underneath isn't really a concern; if the compiler can prove that no modifications will be made -- which is trivial with copy semantics -- then it'll pass a reference along. If it can't, use a copy-on-write scheme. Or just copy it.

If you really need actual references as an optimization, you can use it. I just disagree it should be the default.

91 Name: #!/usr/bin/anonymous : 2008-08-13 01:23 ID:esnnjsPP

> It is not just VB programmers, its any programmers of modern OOP languages like VB, VB.Net (which is different from VB, just has the same style of syntax), C#, Java and others.

VB has references:

Sub Foo(ByRef X As String)
X = "Foo"
End Sub

C++ has references:

void Foo(string &X) {
X = "Foo";
}

Perl has references:

sub Foo {
$_[0] = "Foo";
}

FORTRAN only has references. Java does not. Smalltalk does not. Common-Lisp does not. Most lisps do not. Python does not. Most C++ programmers don't use them. Perl goes through enormous contortions to detect problems at run-time caused by references. As far as I know the only language with the encouraged and pervasive use of references is Visual Basic.

Perhaps you're confusing references with something else?

92 Name: #!/usr/bin/anonymous : 2008-08-13 04:09 ID:svxdzyWV

>>90

Is paint a static method of car? Cars don't paint themselves so having a paint member on car wouldn't make sense. In a proper model, car would be part of some carFactory class or some helper function in an appropriate namespace. In OOP just because you want to do something to an object does not always mean that class should be the one doing it.

>How can they tell if a referenced variable will change? Here's a method call: ...

I can tell you that the bar member will use baz (I mean really it should) and can modify it. To know if it does change you do of course have to inspect bar. To see everywhere that baz will change you have to inspect all the functions that have baz in the signature (or the scope it's created in, of course). Now if we have a global, it could be changed in bar. It could be changed anywhere. I have no idea where in the program that global is going to be used; it can be used and changed anywhere.

Changes to the reference are limited and it's easy to identify where they are potentially going to happen. Changes to globals can happen anywhere, and the entire code needs to be inspected.

>By comparison, if it's a value, you know if baz will change: since you haven't used assignment here, no.

The problem with the value of baz is baz is now not the object you passed in.

>>91

A reference variable is one that evaluates to a pointer. Your examples are true enough, but what I am talking about is objects and OOP.

In VB, VB.Net, C#, Java and some other languages there are 2 types of variables. Value types and reference types. Value types evaluate to the value of the variable (things like numbers and strings) and reference types evaluate to a pointer to the value (all object variables).

In VB you would never want to do this:
Sub Foo(ByRef X As Object)
X = New Object
End Sub

What you have just told it to do is pass a reference to the reference passed in. What you want is the value of the variable to pass in, because all object variables are references. So you would want the signature to read: "ByVal X As Object" to get the reference to the object being passed. (The VB compiler actually isn't that dumb and will treat the above function as if the X variable is passed by value automatically, and strangely without a warning.)

Value type variables can be passed either by the value or by a reference to it.

The distinction between the 2 types of variables makes OOP easier and less prone to mistakes, as the language and compiler treat the variables properly. C++ does not and cannot have this distinction, so it is up to you to add the extra syntax to properly work the OOP way.

Also, all variables in Python are references.

93 Name: dmpk2k!hinhT6kz2E : 2008-08-13 06:50 ID:Heaven

> Cars don't paint themselves so having a paint member on car wouldn't make sense.

Sure, but that's beside the point -- if you don't like the example, substitute foo/bar/baz. I draw your attention to the assignment.

> To see everywhere that baz will change you have to inspect all the functions that have baz in the signature

Indeed. That's a big problem. Maintain any non-trivial codebase and this is a disaster waiting to happen. Let's say there are two method calls that use the same object and you have call-by-reference: does their order matter? You don't know without knowing the innards of all its children. If you ask me, that's taking the principle of least knowledge into the back alley and gang-raping it.

> It could be changed anywhere.

Right. And references can be changed anywhere in your child call hierarchy, which is a bit of an improvement, but leaves a lot to be desired for code comprehensibility. Now how about restricting it to local scope so reasoning about it becomes pretty easy?

For example, random code:

x = [ "hay", "guyz" ]
y = foo( x )
z = bar( x )
puts( x.join )

What will it print if we're using call-by-value semantics? Call-by-reference?
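In C++ terms the two cases are just the parameter declaration. A minimal sketch mirroring the snippet above, with hypothetical function names:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Call-by-value: the callee gets its own copy, so the caller's
// vector is untouched no matter what happens inside.
void bar_value(std::vector<std::string> v) {
    v.push_back("lol");
}

// Call-by-reference: the callee mutates the caller's vector directly,
// so every later use of it sees the change.
void bar_ref(std::vector<std::string>& v) {
    v.push_back("lol");
}
```

With value semantics the answer is knowable from the call site alone; with reference semantics you have to read the callee.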

> The problem with the value of baz is baz is now not the object you passed in.

You're going to have to demonstrate how this is a problem. If you change baz inside a method/function, and you want to keep the changes, return it and assign back to baz -- or better yet, give it a new variable with a descriptive name. It's very clear to any maintenance programmer that something might have changed with baz. It's very clear to you too in several months' time.

94 Name: #!/usr/bin/anonymous : 2008-08-13 12:39 ID:6cdvz5iK

> VB, VB.Net, C#, Java

None of these are very good examples of proper OO languages. Assuming that what they do is the "proper OO way" is pretty naïve.

C++ is far from a proper OO language too, but your argument that it's bad because it doesn't work like the Java-inspired language family is really not valid in any way.

95 Name: #!/usr/bin/anonymous : 2008-08-13 13:31 ID:esnnjsPP

>>92

Hijacking terminology makes talking with you very difficult, and you're using definitions that other people in this field don't use.

References are not the same thing as pointers. Variables aren't "evaluated" except in interpreted languages.

You're complaining that C++ makes you write:

void Fun(Object *Foo);

when you want to write:

void Fun(Object  Foo);

despite the fact that would confuse C and Objective-C programmers. Neither of those are references. Saying "reference" to someone who knows C++ makes people think you are talking about this:

void Fun(Object &Foo);

which is identical to VB's ByRef which stands for by reference. It just so happens that C++ and VB share a definition of Reference.

If you wanted to be understood, you would say "I hate that C++ doesn't automatically make all class-variables pointers to classes by default"

Then we could have a meaningful discussion about what's involved in that, why that would be good, and why it would be bad.

Instead you come off as critiquing something you don't understand, and you really don't know what you're talking about. Saying things like "the OOP way" and "proper OO languages" reinforces this.

It makes it seem like you believe that Object Oriented Programming Languages never existed before Visual Basic. Now, by confusing references and pointers, Java and Python can be Object Oriented languages too, but these are also very young indeed!

96 Name: #!/usr/bin/anonymous : 2008-08-13 16:55 ID:svxdzyWV

> Sure, but that's besides the point -- if you don't like the example, substitute foo/bar/baz. I draw your attention to the assignment.

If I need to pass some object foo to a function bar, and that function needs to change foo, it would make the most sense to give that function foo to change. Giving it foo, returning baz, and then assigning baz back to foo outside of the function means the function didn't accomplish what it needed.

I should be calling foo(bar).
Calling bar = foo(bar) isn't really all that OO. And if I need to return a result from foo, such as whether it was successful or not, I need to add more complexity.

if(foo(bar))

is better than

bar = foo(bar, &baz)
if(baz)

(or, even worse, checking for expected results of the foo call on bar).

Both will work and are readable enough, but the first one follows OO design better. It encapsulates what foo is doing much better. Unless a function creates a new instance of an object or passes instances between application tiers, returning objects from functions isn't good OO design.

>Let's say there are two method calls that use the same object and you have call-by-reference, does their order matter?

I would say it's as easy as knowing what each call accomplishes. You don't need to examine the code.

97 Name: #!/usr/bin/anonymous : 2008-08-13 17:42 ID:svxdzyWV

>>95

I didn't hijack anything. I am not sure if you understand the difference between value type and reference type variables.

> void Fun(Object &Foo); which is identical to VB's ByRef which stands for by reference.

For example, that statement is not true.

void Fun(Object &Foo);

In VB.Net would be:

Sub Fun(ByVal Foo As Object)

It is ByVal because Foo is a reference (because it is an object, which makes it a reference type variable), and you wouldn't pass a reference by reference. You pass in the value of the reference.

>If you wanted to be understood, you would say "I hate that C++ doesn't automatically make all class-variables pointers to classes by default"

No, because pointers wouldn't be the answer. What I am saying is that C++ isn't very good at OOP because it does not treat objects, the basis of OOP, differently from other variables in a way that lends itself to the style of OOP.

>It makes it seem like you believe that Object Oriented Programming Languages never existed before Visual Basic.

I am using commonly used OOP languages as examples because it makes for a more practical discussion.

Also, VB and VB.Net are very different languages. VB isn't very OOP because it doesn't cover other basic OOP concepts like inheritance well, so sticking to VB.Net is better.

And just to add, C++'s multiple inheritance is fucking evil.

98 Name: dmpk2k!hinhT6kz2E : 2008-08-13 17:56 ID:Heaven

> Giving it foo and returning baz and then making baz = to foo outside of the function means the function didn't accomplish what it needed.

It did. The change is available in the object being returned. There is no functional difference except that one is being explicit about change.

> Calling bar = foo(bar) isn't really all that OO.

I like purity, but I'm more interested in what works. The example above isn't OO (it's procedural), but let's run with the idea. Why do you care if it's OO or not? Think carefully about why OO exists, and we'll argue about it.

> is better than

Actually, I think both are poor pieces of code. For the first, why are you mutating an object like that inside a comparison? The same applies with foo() in the second: you're trying to do too much with one function.

Also, sane languages allow multiple return values, but if you're getting multiple return values -- which is what you're attempting with the second example -- that's a code smell.

> It encapsulates what foo is doing much better.

How? Both have exactly the same external effect: change bar and return a status about the change.

> I would say its as easy as knowing what each call accomplishes.

Well then, what does each one print? Give it a try and tell me what you'll get with value and reference semantics.

99 Name: #!/usr/bin/anonymous : 2008-08-13 18:50 ID:m2TJrD+R

> I didn't hijack anything.

You're redefining terms to mean something other than their accepted meaning.

> void Fun(Object &Foo);
>
> In VB.Net would be:
>
> Sub Fun(ByVal Foo As Object)

No it wouldn't, because if Fun modifies Foo by assignment, that is using:

Foo = Bar;

then the C++ version modifies the Foo as seen by the caller, whereas it doesn't modify the Foo as seen by VB.NET's caller.

http://www.cprogramming.com/tutorial/references.html

> What I am saying is that C++ isn't very good at OOP because it does not treat objects, the basis of OOP, differently from other variables in a way that lends itself to the style of OOP.

I don't think you have any idea what you're talking about. You clearly do not understand C++.

> I am using commonly used OOP languages as examples because it makes for a more practical discussion.

You're using VB and VB.Net because you don't know any other object oriented languages. I cut my teeth on Simula 67, so I give the term "Object Oriented" a quite wide berth.

> Also, VB and VB.Net are very different languages. VB isn't very OOP because it doesn't cover other OOP basic concepts like inheritence well so sticking to VB.Net is better.

sigh

Demonstrating you know something about VB and VB.Net doesn't demonstrate that you know anything about C++.

> And just to add, C++'s multiple inheritence is fucking evil.

Like this for example. Perl supports multiple inheritance. Python supports multiple inheritance. CLOS supports multiple inheritance. Eiffel supports multiple inheritance (sort of).

There's nothing wrong with multiple inheritance: it solves very real problems, which is why C# and Java have added interfaces, which solve some of those problems without the ability to share code.

100 Name: #!/usr/bin/anonymous : 2008-08-13 20:26 ID:Heaven

>>97

> What I am saying is that C++ isn't very good at OOP because it does not treat objects, the basis of OOP, differently from other variables in a way that lends itself to the style of OOP.

Some languages approach OOP with a much greater focus on message passing instead of objects. I was introduced to OOP via C++, but when I got into languages like Lisp, Smalltalk, et al., I realized you can do OOP in a variety of ways. I think you should look into these sometime and expand your view of OOP and how it can work.

101 Name: #!/usr/bin/anonymous : 2008-08-14 01:35 ID:svxdzyWV

>>98

>There is no functional difference except that one is being explicit about change.

Yes, but the big difference is the scope of where the change is taking place. It's better encapsulated to change the object in the function that is responsible for making the change, instead of creating a copy of the object and setting it in the scope of the call.

>Why do you care if it's OO or not? Think carefully about why OO exists, and we'll argue about it.

I care in this case because it's a discussion of C++'s OOP abilities.

>For the first, why are you mutating an object like that inside a comparison? The same applies with foo() in the second: you're trying to do too much with one function.

You know, I thought about that after I wrote it. For clarity it should set the value of some bool that was created to store the result.

>How? Both have exactly the same external effect: change bar and return a status about the change

Almost. bar changes foo. Or bar changes a copy of foo and you need to set foo to bar's return in the calling scope. The actual changing of the passed-in parameter happens in bar if it's a reference. The actual change of foo happens in the calling scope if it's a value and bar returns the result.

>>99

>You're redefining terms to mean something other than their accepted meaning.

Not exactly. The term evaluate does not always mean a function that executes arbitrary code at runtime, especially in the context I used it in. I used the word evaluate because Sun's Java docs used it.

>then the C++ value modifies the Foo as seen by the caller whereas it doesn't modify the Foo as seen by VB.NET's caller.

Are you saying that if I passed in an object Baz in VB.Net to:

Sub Fun(ByVal Foo As Object)
Dim Bar As New Object
Bar.Color = "blue"
Foo = Bar
End Sub

by calling something like:

Dim Baz As New Object
Baz.Color = "red"
Fun(Baz)
Print Baz.Color

that it will print "red" as the color? Because it will print "blue". Just as a similar block written in C++ with "void Fun(Object &Foo);" would.

I hope you chose to modify Foo by assignment to illustrate a point, because that is just not practical in most cases and usually a bad coding choice.

>I don't think you have any idea what you're talking about. You clearly do not understand C++.

I do, and you need to remember I am focusing on its OOP abilities. I understand it is limited in many respects by its compatibility with C and its other programming paradigms. This is what I am pointing out.

>You're using VB and VB.Net because you don't know any other object oriented languages.

I was using VB.Net because another poster brought it up.

I was not claiming C++ is the only language with multiple inheritance. But comparing it to Python is a little odd, as Python's implementation is limited (but still troublesome, as I feel all implementations of multiple inheritance are).

>There's nothing wrong with multiple inheritence

I would say there are problems with the way it's implemented most of the time, such as in C++. After working with OOP languages that don't support it, I find that its absence allows for better class creation. I know there are times I wish I could use it, but am much happier with the hierarchy after not doing so. I have found the members of the hierarchy to be more extensible down the line, instead of trying to cram it all into the fewest number of classes.

102 Name: 99 : 2008-08-14 02:18 ID:esnnjsPP

> I care in this case because its a discussion of C++'s OOP abilities.

Which you have yet to justify as anything less than "I don't like typing the asterisk", without explaining what the real problem is.

I program in CL most days and I don't like typing the parentheses. As soon as someone comes up with a way for me to get some of the flexibility I get out of CL without doing that, I'll be a happy guy. Until then, I keep my bitching by and large to myself.

> Especially in the context I used it. I used the word evaluate because Sun's Java docs used it.

No they don't you liar. I challenge you to find a place on sun.com that says "A reference variable is one that evaluates to a pointer."

> Are you saying that if I passed in an object Baz in VB.Net to
> (snippets omitted)

No.

I said:

Sub Foo(ByRef X As String)
X = "Bar"
End Sub
...
Dim Y
Foo(Y)
Print(Y)

and:

void Foo(string& X) {
X = "Bar";
}
...
string Y;
Foo(Y);
cout << Y << endl;

are equivalent, and that this is what a C++ programmer calls a reference. The following:

Sub Foo(ByVal X As String)
X = "Bar"
End Sub

and:

void Foo(string* X) {
X = new string("Bar");
}

are also equivalent; they do nothing except waste memory and time because the above change to "X" doesn't affect the caller of Foo. These are what C++ programmers call pointers and what VB programmers call Objects. Some very misguided tutorials refer to these as reference types. Those tutorials are usually written by non-programmers.
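A compilable sketch of that contrast (the names are illustrative; the leak in the pointer version is deliberate, mirroring the "waste memory and time" point above):

```cpp
#include <cassert>
#include <string>

// Reference parameter: the assignment reaches the caller's variable.
void set_by_reference(std::string& x) {
    x = "Bar";
}

// Pointer parameter reassigned inside the callee: only the callee's
// local copy of the pointer changes, so the caller sees nothing and
// the allocation is simply lost.
void set_pointer_copy(std::string* x) {
    x = new std::string("Bar");  // deliberate leak, as in the example above
}
```

The reference version changes what the caller sees; the pointer-reassignment version does not.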

> I do and you need to remember I am focusing on its OOP abilities. I understand it is limited in many respects by its compatibility with C and it other programming paradigms. This is what I am pointing out.

You have no idea what you're talking about. CLOS, Smalltalk and Simula pioneered OOP in three completely separate respects: method resolution, message passing and data hiding. Visual Basic is about as object-oriented as a bucket of rocks, barely meeting some very loose definition of the term "Object".

C++ uses objects in the simula-sense. Java uses them in the smalltalk-sense. Saying one is more object-oriented than the other is retarded.

C++ has a lot of problems, but being "less object oriented than visual basic" is a crap load of shit.

> I would say there are problems with the way its implemented most of the time such as in C++.

And that matters because... how?

Either justify a broad statement like "C++'s multiple inheritance is fucking evil" or shut the fuck up, and at this point I'd prefer the latter; you don't seem to have anything interesting to add.

> After working with OOP languages that don't support it, I find that it allows for better class creation.

You're wrong.

> I know there are times I wish I could use it but am much happier with the heirarchy after not doing so.

Double wrong. You've never used it before. You're a big fat liar.

> I was not claiming C++ is the only language with multiple inheritance. But comparing it to Python is a little odd as Python's implementation is limited (but still troublesome as I feel all implementations of multiple inheritance are).

Liar liar pants on fire.

> I have found the members of the heirarchy to be more extensible down the line instead of trying to cram it all in to the fewest number of classes.

Wronger than wrong.

Multiple inheritance creates more classes, not fewer. It gives you the ability to hook method implementation into your interface classes, and specifies a method resolution order for interacting with that data.

It's not evil, but it is surprising if your mixins interact with local data. This is why C++ programmers recommend you don't do that.
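A small sketch of the "hook method implementation into your interface classes" point (class names are illustrative; note neither mixin keeps local data, per the recommendation above):

```cpp
#include <cassert>
#include <string>

// Two interface-style mixins, each carrying a shared implementation
// on top of one pure virtual hook. Neither holds local data.
struct Printable {
    virtual std::string name() const = 0;
    std::string describe() const { return "I am a " + name(); }
    virtual ~Printable() = default;
};

struct Serializable {
    virtual std::string name() const = 0;
    std::string serialize() const { return "{\"name\": \"" + name() + "\"}"; }
    virtual ~Serializable() = default;
};

// Multiple inheritance pulls both implementations into one class;
// a single override satisfies the hook declared in both bases.
struct Widget : Printable, Serializable {
    std::string name() const override { return "widget"; }
};
```

The derived class ends up with code from both bases while writing only one method, which is exactly the sharing that Java-style interfaces (at the time) couldn't do.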

103 Name: dmpk2k!hinhT6kz2E : 2008-08-14 04:56 ID:Heaven

> It's better encapsulated to change the object in the function that is responsible for making the change,

What details about the change are you exposing by assigning from a return?

> instead of creating a copy of the object and setting it in the scope of the call.

This makes explicit that there was change, but doesn't say what was changed in the object or how. I don't see what you gain by hiding this fact.

> The actual change of foo happens in the calling scope if its a value and bar return the result.

Sure, but as I've argued above I believe this is superior. It guarantees that the fact there was a change is known, although not what the change was. There's much less chance you'll break something by reordering calls.

Hell, I'll go even further and say that single assignment is a good idea. Then there's zero chance you'll break something by reordering, since you can't reuse a variable name. Of course, that's mutually exclusive with loops, so it only works in languages that rely solely on recursion.
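The single-assignment idea can be sketched even in C++: const bindings plus recursion in place of a loop. A purely illustrative toy:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum a vector in single-assignment style: every name is bound exactly
// once and never reassigned, so no reordering of the reads can change
// the result, and the loop is replaced by recursion.
int sum(const std::vector<int>& v, std::size_t i = 0) {
    if (i == v.size()) return 0;
    const int head = v[i];           // bound once
    const int rest = sum(v, i + 1);  // bound once
    return head + rest;
}
```

Since nothing is ever rebound, reasoning about any given name is local and trivial.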

104 Name: dmpk2k!hinhT6kz2E : 2008-08-14 05:14 ID:Heaven

>>102

> As soon as someone comes up with a way for me to get some of the flexibility I get out of CL without doing it I'll be a happy guy.

Forth!

I kid. I have hopes for Factor though. It's actually growing useful libraries.

> No they don't you liar.

Let's keep this civil? D:

105 Name: 99 : 2008-08-14 13:26 ID:Heaven

>>104

I do enjoy Forth a lot, but sadly I don't think quite as well in Forth as I do in CL.

I have a hard time getting excited about Factor because it seems to combine the worst parts of both languages, and it seems far more like postscript than like Forth.

Although the real reason I haven't given Factor a real effort is that it doesn't work very well on my machine (Xv), and I haven't heard quite enough praise to work past the technical problems.

106 Name: #!/usr/bin/anonymous : 2008-08-14 16:47 ID:svxdzyWV

>>102

>Which you have yet to justify as anything less than "I don't like typing the asterisk", without explaining what the real problem is.

As I have stated multiple times, the default behavior of C++ does not treat object variables as an instance of the object in various scopes. It is left up to the programmer to implement extra syntax and logic to treat object instances properly.

>No they don't you liar. I challenge you to find a place on sun.com that says "A reference variable is one that evaluates to a pointer."

Well a quick search shows that the word evaluate is used in many different contexts in Sun's Java docs.

>No.
>I said:
>...
>Some very misguided tutorials refer to these as reference types. Those tutorials are usually written by non-programmers.

This is where there is a disconnect. Your example uses strings, which are value types in VB.Net and not treated the same as objects. I am talking about objects, which are reference types.

Here is an early article by Jeffrey Richter about those types.
http://msdn.microsoft.com/en-us/magazine/cc301569.aspx
Who is Jeffrey Richter? He has contributed both design and code to the following products: Windows (all 32-bit and 64-bit versions), Visual Studio .NET, Microsoft Office, TerraServer, the .NET Framework, "Longhorn" and "Indigo".
So I think he knows what he is talking about.

>Visual basic is about as object-oriented as a bucket of rocks;

I have already agreed with you on that. My statement was regarding C++.

>Saying one is more object-oriented than the other is retarded.

It is possible to say a language is more OOP than another.

>Either justify a broad statement like "C++'s multiple inheritence is fucking evil." or shut the fuck up, and at this point I'd prefer the latter; you don't seem to have anything interesting to add.

Let's focus on one thing at a time. First we need to clear up your misconceptions about reference variables.

>>103

>What details about the change are you exposing by assigning from a return?

The very detail that the change occurred to the object passed in.

>This makes explicit that there was change, but doesn't say what was changed in the object or how. I don't see what you gain by hiding this fact.

Making the change through assignment won't tell you what in the object changed either. You also save memory overhead and repetition. If a function takes an object and changes it, one would expect the object passed in to be the one changed. Why would one write a function that takes an object, creates a copy of it, changes that copy, and returns the copy? If one did not want to change the original instance of the object, a copy of that object should be made by the programmer in the scope of the call, and the copy passed in.

>Sure, but as I've argued above I believe this is superior. It guarantees that the fact there was a change is known, although not what the change was.

The implementation of the function should let one know that the object was changed. In the case where one knows nothing about the code they are looking at, it is more obvious in the scope of the call that the object may have changed. But at the same time, you need to know a little about the code you are working with. One doesn't just start calling functions without knowing what they do first.

>There's much less chance you'll break something by reordering calls.

Reordering calls to what exactly?

If I had:
foo = one(foo)
foo = two(foo)
foo = three(foo)

I could do the same (as a reference) with:
one(foo)
two(foo)
three(foo)

Re-ordering those calls in either case wouldn't affect anything the other doesn't affect; at the end of each call, foo is the same.

This thread has been closed. You cannot post in this thread any longer.