2007-07-26, 21:55 | Link #41 | |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
Quote:
ATI must surely realise that they need to do something about the OSS market. I mean, have they seen how hard it is to get Beryl working with an ATI card?
__________________
|
|
2007-07-26, 22:02 | Link #42 |
You could say.....
Join Date: Apr 2007
|
Good to hear. I really want to buy an HD2400xt for my HTPC, but there's nothing for Linux users. I mean, nvidia already had all their stuff together for the high-definition 8xxx series within weeks of release, and their drivers were stable.
I can never understand why ATI sucks at this. Considering they and their owners AMD are lagging behind their competition (Intel and Nvidia), you'd think they'd be on their bike trying to make their products as accessible on all platforms as possible. AMD's market share is about 20%; ATI's is probably around 30% (speculation). No-one I know using Linux will touch ATI with a 10ft pole. Granted, Linux users make up maybe 10-15% of all home OSes, but it's still silly of them to just flat-out suck at Linux support for so long. In my experience ATI products offer better performance out of the box than nvidia; it's just that nvidia has better support and more overclocking capability. I'll hold off buying an 8500gt for a couple of months; hopefully ATI will have released something by then so I at least have a choice when upgrading my 7300gt. Now if only there were some way to get proper on-the-fly Dolby Digital or DTS encoding on Linux. Highly unlikely though. |
2007-07-27, 10:21 | Link #43 | |
AS Oji-kun
Join Date: Nov 2006
Age: 74
|
Quote:
I use the proprietary NVidia driver, which works fine for me. Luckily there are pre-built kernel modules at Livna, so updates generally happen in sync with kernel updates. I'm glad Dell is leaning on ATI, but I'm happy with my NVidia setup as it stands. There have also been repeated discussions about a standardized "application binary interface" that would let companies write proprietary drivers without needing kernel source code "shims." I believe Linus opposes the whole notion, though, so I doubt it's going to happen any time soon. I just wish there were some good resolution to the problem of proprietary binaries working with Linux. These issues have been around for nearly a decade now and show little sign of being resolved. If anything, the debate over GPLv3 suggests the situation may worsen, not improve.
__________________
|
|
2007-07-28, 16:35 | Link #44 | |
Gregory House
IT Support
|
Linus replies to Con Kolivas and addresses the scheduler issue.
Quote:
__________________
|
|
2007-07-29, 05:31 | Link #45 |
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
Honestly I don't know which strategy is better... The Completely Fair Scheduler works by letting the task with the longest wait time run next. It supports levels of task importance and includes detection of sleeper tasks. (The problem with sleeper tasks is that they immediately give the CPU back to other tasks while they are still in sleep mode, triggering a useless scheduling event, and scheduling consumes CPU time too. With lots of sleeping tasks this can be a considerable performance killer, since the scheduler would needlessly waste cycles on sleepers; hence this approach needs a sleeper detector.)
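To make the "longest wait runs next" idea concrete, here is a tiny toy model in Python. This is purely illustrative: the class and numbers are made up, and the real CFS in the kernel is far more involved (red-black trees, nanosecond accounting, nice-level weight tables). The only point is that picking the task with the least weighted runtime so far is the same as picking the one that has waited longest for its fair share.

```python
# Toy model of "completely fair" task selection. A task's vruntime is the
# CPU time it has received, scaled down by its weight; the scheduler always
# runs the task with the smallest vruntime, i.e. the one furthest behind
# its fair share. All names/values here are illustrative, not kernel code.

class Task:
    def __init__(self, name, weight=1):
        self.name = name
        self.weight = weight      # stands in for nice-level priority
        self.vruntime = 0.0       # weighted CPU time received so far

def pick_next(tasks):
    # Smallest weighted runtime == longest effective wait.
    return min(tasks, key=lambda t: t.vruntime)

def run(tasks, slice_ms=10, steps=6):
    order = []
    for _ in range(steps):
        t = pick_next(tasks)
        t.vruntime += slice_ms / t.weight  # heavier weight ages more slowly
        order.append(t.name)
    return order

tasks = [Task("editor", weight=2), Task("compiler", weight=1)]
# The editor (weight 2) ends up with twice as many slices as the compiler.
print(run(tasks))
```

Over six slices the weight-2 task gets four of them and the weight-1 task gets two, which is exactly the 2:1 share the weights ask for.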
The Staircase Deadline scheduler lets tasks run according to their priority, which gives them a certain place on the staircase and a certain quota of CPU time. When a process partly uses its quota, it drops in priority to the next stair. Each stair also has its own quota, so when the quota for an entire stair is used up, all its tasks drop to the next lower stair. Tasks that completely use their quota are placed in a special expired area where their priority is restored to its original value (but they cannot run there). The algorithm proceeds until all tasks are either in the expired area or on the lowest stair with that stair's quota used up. Then all remaining tasks are moved into the expired area, their priorities are restored, the whole expired area becomes the active area, and the descent down the staircase starts again.

In completely fair scheduling, wait time and task priority decide who gets CPU time; in staircase deadline scheduling it is priority and quota (the deadline). Both algorithmic ideas seem efficient imo, though I think the staircase deadline approach is a little more complex. On the other hand, the staircase deadline scheduler is by design optimized to deal with sleepers: high-priority tasks that sleep will not use much quota but still drop down a stair each time they run on the CPU, so the scheduler avoids wasting the processor time of low-priority tasks on sleep cycles, since all the sleeper tasks end up on the same low stair. Since I don't know how the sleeper detector in completely fair scheduling works, it is hard to compare the two. Both approaches seem quite well suited for fair scheduling; the completely fair scheduler might even be better, depending on how well the sleeper detector works.
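A minimal sketch of the staircase idea, again as an illustrative Python toy rather than anything resembling Con Kolivas's actual implementation (which also tracks per-stair quotas and per-CPU runqueues; this sketch only models the per-task descent and the expired area):

```python
# Toy staircase-deadline model: each task starts on the stair matching its
# priority with a CPU-time quota. Partially spending the quota drops the
# task one stair; exhausting it parks the task in the "expired" area with
# its original priority and quota restored. Illustrative only.

class Task:
    def __init__(self, name, priority, quota=20):
        self.name = name
        self.base_priority = priority
        self.stair = priority        # current stair (higher runs sooner)
        self.quota = quota           # CPU time left in this rotation
        self.initial_quota = quota

def rotate(active):
    expired = []
    trace = []                       # (task name, stair it ran on)
    while active:
        # Run the task on the highest remaining stair.
        task = max(active, key=lambda t: t.stair)
        used = min(10, task.quota)   # fixed 10 ms timeslice for the sketch
        task.quota -= used
        trace.append((task.name, task.stair))
        if task.quota == 0:
            # Quota exhausted: move to the expired area, restore state.
            active.remove(task)
            task.stair = task.base_priority
            task.quota = task.initial_quota
            expired.append(task)
        else:
            task.stair -= 1          # partial use: drop one stair
    # Rotation over: the expired area would become the new active array.
    return trace, expired

high = Task("high", priority=3)
low = Task("low", priority=1)
trace, expired = rotate([high, low])
print(trace)   # the high-priority task runs first, stepping down stairs
```

You can see the descent directly in the trace: the priority-3 task runs on stair 3, then stair 2, then exhausts its quota and lands in the expired area with its priority back at 3, after which the priority-1 task gets its turns.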
Now let me finally explain how an ideal scheduler should work: 1) schedule the processing time of each available task completely fairly according to its needs and priority (e.g. a sleeping task doesn't need to be executed); 2) do 1) with the lowest possible time and effort, so the scheduler itself uses as little processing time as possible; 3) find the best trade-off between 1) and 2). Usually you can increase fairness by making the scheduler more complex, which increases the scheduler's own load on the available performance; you can decrease complexity, and therefore the scheduler's load, by using simple scheduling, which might not be very fair. Depending on how the scheduler is used, the optimum is different for different systems. On multiprocessor systems I could imagine the staircase deadline approach growing in complexity more than the completely fair scheduler. (But this is my gut instinct, not a scientific conclusion.)
__________________
|
2007-07-29, 05:46 | Link #46 | |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
Quote:
Even if we ignore the statements about Con's attitude to the bug reporters, I personally would always take a proven developer over an unknown. http://kerneltrap.org/node/14008
__________________
|
|
2007-08-04, 23:04 | Link #47 | |
Gregory House
IT Support
|
Good news for Red Hat fans (*points at SeijiSensei* ).
Quote:
__________________
|
|
2007-08-07, 03:05 | Link #48 |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
Lenovo are offering SLED on some of their laptops; hopefully the market will see some Compiz/Beryl eye candy preinstalled to give Vista a run for its money.
http://news.bbc.co.uk/1/hi/technology/6933859.stm So out of the big 3 we have: Dell - Ubuntu, Lenovo (IBM) - SuSE, HP.... Come on Red Hat, pull your fingers out!
__________________
|
2007-08-08, 18:14 | Link #50 | |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Quote:
|
|
2007-08-08, 18:42 | Link #51 | ||
Gregory House
IT Support
|
Quote:
Code:
./configure && make && make install
BTW, I should remind you of the first post of this thread: Quote:
__________________
|
||
2007-08-08, 20:04 | Link #52 | |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Quote:
|
|
2007-08-08, 21:19 | Link #53 |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
Sorry but I have to answer that point...
More hardware vendors will mean more ease of use for end users. They are in the business of selling their products, and just as a major non-OSS software vendor (Novell) has invested in usability, I believe the hardware vendors will do so too. More users = more money to invest in making more users. For these bad boys it isn't about the FSF and Stallman, but about selling stuff, and one barrier they all know they need to break through is the usability one.

WK mentioned command-line apt; I personally haven't gone near the command line on this laptop, everything was done via Add/Remove. I wanted to experience Ubuntu from the user's perspective. OpenSUSE makes installing apps even easier with YaST: no need to worry about dependencies etc. I think what is happening is a glorious thing, because as WK pointed out it will increase driver compatibility.

User uptake of Linux has already happened; the user just doesn't know it. Lots of us use it in one form or another without realising. Hardware vendors selling it as a mainstream OS gives us buyers more choice *cheer*, and gives us users more impact when software/hardware developers produce their products *cheer*. As long as it doesn't flop like Yogi Bear with no food, I can't think of anything negative about this news.
__________________
|
2007-09-06, 15:16 | Link #54 | |
AS Oji-kun
Join Date: Nov 2006
Age: 74
|
ATI to release open-source video drivers
From the cited blog posting: Quote:
Of course, the ATI decision is being made by its new management after its purchase by AMD. AMD sees its primary competitor, Intel, offering open-sourced drivers for its video and wireless chipsets. It seems unlikely that nVIDIA will not eventually follow suit if only for competitive reasons. Proprietary video drivers have been one of the last major hurdles to developing truly open systems. I'm rather glad now to see that Linus didn't back down from his "no-binary-interfaces" position. Who would have thought that the hardware manufacturers like Intel and AMD would be the ones to fold their hands?
__________________
|
|
2007-09-06, 20:53 | Link #55 |
Love Yourself
Join Date: Mar 2003
Location: Northeast USA
Age: 38
|
I wonder if it has to do with AMD's unveiling of their new Fusion-type processors, due out in 2009... basically, their plan is to have one (or more) "Bulldozer" cores dedicated to processing graphics. I'd imagine it'll require a completely different set of drivers, and if it's successful, it might signal the end of graphics "cards" as we know them. I doubt it'll knock graphics cards out directly, but it'd be interesting to see...
I was going to link a neat article I read about the Bulldozer cores, along with the Falcon (desktop) and Sand Tiger (server) processors, but I can't find it in the history on any of my computers. Could've sworn I read it yesterday...
__________________
|
|
|