
Re: [News] Linux Protection Against Buffer Overflow-based Vulnerabilities

Happy new year, Rob, and welcome back.

__/ [ BearItAll ] on Tuesday 02 January 2007 10:30 \__

> Roy Schestowitz wrote:
> 
>> Protect Linux / UNIX application from buffer overflows
>> 
>> ,----[ Quote ]
>> | A buffer overflow is a serious security problem. It allows an
>> | attacker to inject executable code of their choice into an
>> | already-running application. With such problems in mind, Berger
>> | created a new program that prevents crashing and makes users safer.
>> `----
>> 
>> http://technocrat.net/d/2007/1/1/12815
>
> Lazy programming, and I'm guilty too, has been the main cause of buffer
> over/under-run problems. These are becoming less common; it looks as
> though much of the code has been fixed. No doubt a few will have been
> missed, but I'm sure the programmers have it in mind and are watching for
> it. We know that they are, because many an update of individual packages
> is concerned with an 'overrun possibility', so they are fixing these
> before they can become a problem for users.
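
For anyone who hasn't seen the classic mistake, it usually boils down to
copying into a fixed-size buffer without checking the length. A toy sketch
in C (not taken from any real package), with the usual fix next to it:

#include <stdio.h>
#include <string.h>

/* The classic mistake: no length check, so anything longer than 15
 * characters plus the terminator runs off the end of 'name' and
 * tramples whatever sits next to it on the stack. */
void greet_unsafe(const char *input)
{
    char name[16];
    strcpy(name, input);               /* overflow waiting to happen */
    printf("Hello, %s\n", name);
}

/* The boring fix those 'overrun possibility' updates tend to apply:
 * bound the copy to the size of the destination. */
void greet_safe(const char *input)
{
    char name[16];
    snprintf(name, sizeof name, "%s", input);
    printf("Hello, %s\n", name);
}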


Some time ago I read in Schneier on Security that Red Hat have adopted some
form of address randomisation which makes overflows like this very hard to
exploit. The whole assumption of a fixed, uniform memory layout gets broken.
Vista is catching up in the sense that it works with BIOS companies and
OEMs. Of course, Vista already has known vulnerabilities and at least one
public (0-day) exploit.
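
You can watch the randomisation at work by printing the address of a stack
variable and a heap block and running the program twice; with address space
randomisation enabled the numbers change from run to run, so an attacker
cannot rely on a hard-coded address. A quick sketch (on reasonably recent
kernels /proc/sys/kernel/randomize_va_space controls the behaviour):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack;
    void *on_heap = malloc(32);

    /* Different addresses on every run when randomisation is on. */
    printf("stack: %p\n", (void *)&on_stack);
    printf("heap : %p\n", on_heap);

    free(on_heap);
    return 0;
}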


> There was a time, in my assembler days, when I would make all non-static
> buffers ring buffers, so that even after calculating the needed size plus
> a bit just in case, the ring buffer would mean overflow simply wasn't
> possible. (Most of my programming then was hardware and comms, so I had
> ring buffers in there anyway.) So what might have been an overflow may
> well damage data, because it wraps round before the data is dealt with,
> but it will not flow out of the buffer. Of course the program would likely
> still crash because the data was now wrong, but you can't have everything
> :)
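
For the record, the idea fits in a few lines of C: the indices wrap modulo
the buffer size, so in the worst case a write clobbers old, unread data,
but it can never step outside the array. A minimal sketch (RING_SIZE and
the struct names are made up for illustration):

#include <stddef.h>

#define RING_SIZE 256          /* power of two, so masking works */

struct ring {
    unsigned char data[RING_SIZE];
    size_t head;               /* next slot to write */
    size_t tail;               /* next slot to read  */
};

/* Writes always wrap back to the start of the array, so the worst
 * case is overwriting old data, never running past the buffer. */
static void ring_put(struct ring *r, unsigned char byte)
{
    r->data[r->head++ & (RING_SIZE - 1)] = byte;
}

static int ring_get(struct ring *r, unsigned char *out)
{
    if (r->tail == r->head)
        return 0;              /* empty */
    *out = r->data[r->tail++ & (RING_SIZE - 1)];
    return 1;
}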


Remember the BSoDs in Windows 95? The ones that you could invoke on a remote
PC which had a faulty (leaky?) TCP/IP stack implementation? Wasn't it just
nicked from BSD anyway...? Crashes are not as harmful as the hijacking of
PCs, but they are annoying nonetheless. Not to worry. Word and other programs
already assume instability, so saving work to disk is a frequent routine.


> But now we know all buffers can be dynamic; there are many examples of
> such classes around. If the environment isn't time/space critical, there
> is no real reason why statics cannot be dynamic too. So really there isn't
> an excuse for buffer overflows any more; it is just a matter of finding
> the old ones still hidden away in a lib somewhere. There certainly is no
> excuse for a buffer overflow on interfaces, because these should be ring
> buffers anyway.
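
A growable buffer along those lines is only a realloc away in plain C
(C++ gets the same thing for free from std::vector or std::string). A
sketch, with the names made up for illustration:

#include <stdlib.h>
#include <string.h>

struct dynbuf {
    char  *data;
    size_t len;
    size_t cap;
};

/* Doubles the capacity whenever the incoming data would not fit, so
 * the caller never has to guess a maximum size up front. */
static int dynbuf_append(struct dynbuf *b, const char *src, size_t n)
{
    if (b->len + n > b->cap) {
        size_t newcap = b->cap ? b->cap : 64;
        while (newcap < b->len + n)
            newcap *= 2;
        char *p = realloc(b->data, newcap);
        if (!p)
            return -1;         /* out of memory; buffer left intact */
        b->data = p;
        b->cap  = newcap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}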


The question is: is that dynamic behaviour predictable? Maybe that's the
reason Microsoft is thinking about the BIOS... the system clock as a source
for randomisation/seeding?
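
The worry with the clock is exactly that it is guessable: seed a generator
with the time and anyone who knows roughly when the machine booted can
reproduce the result. A small sketch of the difference, using the kernel's
entropy pool via /dev/urandom for comparison:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Clock-based seeding: whoever guesses the start time (to the
     * second) can replay the whole sequence. */
    srand((unsigned)time(NULL));
    printf("clock-seeded  : %d\n", rand());

    /* Kernel entropy: not reproducible by guessing. */
    unsigned seed = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f) {
        if (fread(&seed, sizeof seed, 1, f) != 1)
            seed = 0;          /* illustration only; no fallback logic */
        fclose(f);
    }
    srand(seed);
    printf("urandom-seeded: %d\n", rand());
    return 0;
}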


-- 
                        ~~ Kind greetings and happy holidays!

Roy S. Schestowitz      |    GPL'd Reversi: http://othellomaster.com
http://Schestowitz.com  |  RHAT GNU/Linux   ¦     PGP-Key: 0x74572E8E
         run-level 5  Oct 18 14:45                   last=S  
      http://iuron.com - help build a non-profit search engine
