All posts by webdirectorynow

Pbx Auto Attendant – Handles Incoming Calls Effectively

The PBX auto attendant is a vital component of this sophisticated phone system. It helps business offices streamline their telecommunications very effectively. The innovative PBX auto attendant handles all incoming calls efficiently and provides callers with a superior, professional communication interface.

Feature-rich Auto Attendant

The VoIP PBX auto attendant greets callers professionally using recorded, customized greeting messages. It offers them a list of options to reach the person or department they want, including dial-by-name, dial-by-department and dial-by-extension. Depending on the caller’s choice, calls are quickly redirected to extensions, conference rooms, call queues, mailboxes or groups. The system can manage numerous simultaneous incoming calls and routes them without giving off busy signals. This reliable call answering system helps you save the expense of maintaining additional reception staff. The core features of the VoIP PBX auto attendant solution include:

- 1-to-5 digit extensions
- Find-me/follow-me call forwarding
- Music on transfer
- Voicemail to email
- Do Not Disturb mode
- Fax to email

Easy to Connect to Remote Extensions

To ensure small and medium businesses round-the-clock telecommunication facilities, the PBX auto attendant performs systematic call routing, even to remote extensions. Advanced find-me/follow-me call forwarding allows calls to your primary business numbers to be easily transferred to your personal landlines, cell phones or other VoIP phones assigned as extensions, helping you manage your business from anywhere (see the sketch after this list). If all extensions are busy, calls are quickly routed to the voicemail system, where callers can leave voice messages. This eliminates the possibility of missing important calls. Important benefits of the virtual PBX auto attendant system are:

- Minimizes caller on-hold time
- Prompt response to all incoming calls
- Improves staff productivity
- Saves time and money
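
To make the routing concrete, here is a minimal sketch of how a find-me/follow-me cascade behaves (not vendor code; the endpoint names and the ring() stub are invented for illustration): each assigned extension is rung in turn, and the call falls through to voicemail only when nothing answers.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Stub: pretend to ring an endpoint for `seconds`; a real PBX would place
// a SIP call here. Returns true if the call was answered.
bool ring(const std::string& endpoint, int seconds) {
    std::cout << "Ringing " << endpoint << " for " << seconds << "s...\n";
    return false; // in this sketch, nobody picks up
}

void routeIncomingCall(const std::vector<std::string>& followMeList) {
    // Try each assigned extension in turn: desk phone, cell, home landline.
    for (const auto& endpoint : followMeList) {
        if (ring(endpoint, 20)) {
            std::cout << "Call connected to " << endpoint << "\n";
            return;
        }
    }
    // All extensions busy or unanswered: divert to voicemail so no
    // important call is ever missed.
    std::cout << "No answer anywhere; sending caller to voicemail.\n";
}

int main() {
    routeIncomingCall({"desk-ext-101", "cell-555-1234", "home-landline"});
}
```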

Pick the Right Auto Attendant Package

Small and medium-sized business offices can enjoy the features of the virtual PBX auto attendant without having to invest in premise-based equipment, software packages or complex networks. The service provider maintains all the necessary equipment and offers installation, upgrades and maintenance. A reliable service provider can help you pick the right auto attendant package, as well as help you configure and personalize it to suit the specific telecommunication needs of your corporate office.

Royalty-Free Army Icons for Military Games and Web Sites

Releasing a great new military game? Designing a Web site for army fans? You’ll need icons for toolbars, application icons, navigation icons and logotypes. Get military icons from a design studio, and you’ll have to spend hours specifying icons, wait days or weeks for the icons to arrive, and spend hundreds of dollars for just a few icons.

Or you can skip the design studio and order an instantly available set of army icons. Over a hundred military and army icons are available in the Military Icon Set for less than thirty dollars. With icons such as Aim, Arrow, Maps, Dragon and Alert, the set of military icons can enhance any war game, weapon collector Web site or army portal. Using stock icons from the Military Icon Set saves the worry of ordering custom-made icons, reduces total expenses and speeds up project development.

Military Icon Set offers over a hundred images in a variety of resolutions and formats. Raster icons are provided in 16×16, 24×24, 32×32, 48×48, 128×128, and 256×256 pixel resolutions, with vector images in 3DS Max format available for a fee. 256-color and 32-bit versions are supplied with every order. All True Color military icons are supplied with and without a shadow, and include a semi-transparency mask for smooth edges. All military icons are supplied in GIF, PNG, BMP, and ICO formats for quick integration with Web sites and software.

All military icons from the Military Icon Set are designed to match the Windows Vista style, and drawn in strict accordance with current Microsoft guidelines. Army icons such as Soldier, Army, Maps, Fire, Star Wars, Radar and Helmet are provided. Small-resolution icons can be used instantly in toolbars and dialogs, while the largest size of 256×256 pixels is perfect for logotypes and Web headers.

Army icons from the Military Icon Set can be previewed online at . After ordering military icons, you’ll get exactly what you see almost immediately. All icons from the Military Icon Set are royalty-free, which means that you can use the icons in as many applications and Web sites as you wish without asking for permission or paying for extra licenses.

Teenage Boys Wear — Tips to Look Fashionable

Hanging out without bothering about what’s in and what’s not may seem to be the fashion belief of teenagers. But with modernization and new fashion trends coming in, the craving to look trendy and up to date has become a dream for teenage boys. In the attempt to look fashionable and modern, though, they often commit various fashion slip-ups.

Here in this article you will get to explore various decent trends in boys’ wear that make you look stylish and contemporary without committing big fashion blunders. Check them out:

1) The first and foremost thing to consider about looking fashionable is to put your hands on clothes that are not only stylish but also fit you comfortably. If you are not comfortable, there is a chance you will look dubious in the clothes you are wearing, which will unnecessarily bring you a lot of negative attention.

2) Don’t follow what’s in too closely. Follow a different style standard and pattern of dressing that may not be taken completely from the passing trends, but should be in sync with the latest fashion while making a distinct stand.

3) Don’t buy a bulk of teenage wear from a single fashion trend. This may be a waste of money in case the trend lasts for only a short time.

4) Consider the occasion you are going to attend. Whether the wear is formal, informal or casual, all you have to do is select the apt category of clothing suited to the environment, occasion and weather, in sync with the latest fashion trends.

5) Choose an apt color combination. With the variety of shades available, shopping for teenage wear is sometimes overwhelming. But make sure the colors you wear are not at odds with each other. They should complement each other and make you look graceful when dressed up in them.

6) Take care of your clothes. Avoid being sloppy and don’t wear wrinkled clothes.

7) When going out for special occasions, buy oxford or button-down shirts that fit your neck perfectly. Take help from the sales associate: ask him to take the right measurements and then select accordingly.

8) Never wear socks with sandals. This is a big blunder common among the teenage crowd.

9) Lastly, you can take help from your girlfriend, sister or mother, who can guide you on what suits you.

For more details on Boys Apparel visit www.naaptol.com

All You Need To Know About A Sport Called Dodgeball

Playing dodgeball is a really different experience among games and sports. In this game, all the players try to hit other players with the ball while avoiding getting hit themselves. Dodgeball is a famous sport among young lads, an evening recreational game in community parks, and popular among schoolboys.

Dodgeball is a basic game that is mostly taught and played in elementary schools. School physical education programs state that dodgeball helps develop good balance in young children. The sport is also growing fast internationally: nowadays, dodgeball has emerged as a popular sport in middle and high schools, as well as colleges.

Dodgeball is played on a playground. The concept of the sport includes elimination of players upon being hit by an opposing team member, or when they step out of the boundary of the playground in order to avoid being smacked by a ball thrown at them.

Generally, if a player is eliminated from the game for any reason, he can re-enter the match when a player from the same team catches an opponent’s throw. A game of dodgeball normally uses 1-10 balls. Does anyone know the exact size of a dodgeball? It is really difficult to tell, as there is no standard measurement for a dodgeball yet, but dodgeballs are usually found to be roughly the size of a volleyball.

A dodgeball is filled with foam and covered with a plastic shell on the outside, though some dodgeballs are made of rubber.

Dodgeball is an informal game that does not bind players to a set frame of rules. The players can settle on anything that suits them all, be it the choice of ball, place, playing area or number of players per team. This ensures that the players enjoy the game to the fullest, adding innovative rules and following unique gaming strategies.

The game starts with all the dodgeballs lined up along the central dividing line; in some other versions, the balls are thrown into the air for the players to catch and start the game by throwing the ball to their teammates. The moment a player is squarely hit by a thrown ball, he is eliminated from the game until one of his teammates hits an opponent or catches the opposing team’s throw. The game ends when all the players of one team have been eliminated.

The Evolution of Direct3D

* UPDATE: Be sure to read the comment thread at the end of this blog, the discussion got interesting.

It’s been many years since I worked on Direct3D, and over the years the technology has evolved dramatically. Modern GPU hardware has changed tremendously, achieving processing power and capabilities way beyond anything I dreamed of having access to in my lifetime. The evolution of the modern GPU is the result of many fascinating market forces, but the one I know best and find most interesting was the influence that Direct3D had on the new generation of GPUs that sport thousands of processing cores, have billions more transistors than the host CPU, and are many times faster than the CPU at highly parallel workloads. I’ve told a lot of funny stories about how politically Direct3D was created, but here I would like to document some of the history of how the Direct3D architecture came about, and how that architecture had a profound influence on modern consumer GPUs.

Published here with this article is the original documentation for Direct3D (DirectX 2) from when it was first introduced in 1995. Contained in this document is an architectural vision for 3D hardware acceleration that was largely responsible for shaping the modern GPU into the incredibly powerful, increasingly ubiquitous consumer general-purpose supercomputer we see today.

[Figure: D3DOVER]
The reason I got into computer graphics was NOT an interest in gaming, it was an interest in computational simulation of physics. I studied 3D at Siggraph conferences in the late 1980’s because I wanted to understand how to approach simulating quantum mechanics, chemistry and biological systems computationally. Simulating light interactions with materials was all the rage at Siggraph back then, so I learned 3D. Understanding 3D mathematics and the physics of light made me a graphics and color expert, which got me a career in the publishing industry early on creating PostScript RIPs (Raster Image Processors). I worked with a team of engineers in Cambridge, England creating software solutions for printing screened color graphics before the invention of continuous tone printing. That expertise got me recruited by Microsoft in the early 1990’s to re-design the Windows 95 and Windows NT print architecture to be more competitive with Apple’s superior capabilities at that time. My career came full circle back to 3D when an initiative I started with a few friends to re-design the Windows graphics and media architecture (DirectX) to support real-time gaming and video applications resulted in gaming becoming hugely strategic to Microsoft. Sony introduced a consumer 3D game console (the PlayStation 1), and being responsible for DirectX it was incumbent on us to find a 3D solution for Windows as well.

For me, the challenge in formulating a strategy for consumer 3D gaming for Microsoft was an economic one. What approach to consumer 3D should Microsoft take to create a vibrant competitive market for consumer 3D hardware that was both affordable to consumers AND future proof? The complexity of realistically simulating 3D graphics in real time was so far beyond our capabilities in that era that there was NO hope of choosing a solution that was anything short of an ugly hack that would produce “good enough” 3D for games, while being very far removed from the ideal mathematical solutions we had little hope of seeing implemented in the real world during our careers.

Up until that point the only commercial solutions for 3D hardware were for CAD (Computer Aided Design) applications. These solutions worked fine for people who could afford hundred-thousand-dollar workstations. Although the OpenGL API was the only “standard” for 3D APIs that the market had, it had not been designed with video game applications in mind. For example, texture mapping, an essential technique for producing realistic graphics, was not a priority for CAD models, which needed to be functional, not look cool. Rich dynamic lighting was also important to games but not as important to CAD applications, while high precision was far more important to CAD applications than to gaming. Most importantly, OpenGL was not designed for highly interactive real-time graphics that used off-screen video page buffering to avoid tearing artifacts during rendering. It was not that the OpenGL API could not be adapted to handle these features for gaming, simply that its actual market implementation on expensive workstations did not suggest any elegant path to a $200 consumer gaming card.

In the early 1990’s computer RAM was very expensive; as such, early 3D consumer hardware designs optimized for minimal RAM requirements. The Sony PlayStation 1 optimized for this problem by using a 3D hardware solution that did not rely on a memory-intensive data structure called a Z-buffer; instead it used a polygon-level sorting algorithm that produced ugly intersections between moving joints. This “painter’s algorithm” approach to 3D was very fast and required little RAM. It was an ugly but pragmatic approach for gaming that would have been utterly unacceptable for CAD applications.

In formulating the architecture for Direct3D we were faced with innumerable similar difficult choices. We wanted the leading Windows graphics vendors of the time (ATI, Cirrus, Trident, S3, Matrox and many others) to be able to compete with one another for rapid innovation in the 3D hardware market without creating utter chaos. The technical solution that Microsoft’s OpenGL team espoused via Michael Abrash was a driver model called 3DDDI (3D Device Driver Interface). 3DDDI was a very simple model of a flat driver that just supported the hardware acceleration of 3D rasterization. The complex mathematics associated with transforming and lighting a 3D scene were left to the CPU. 3DDDI used “capability bits” to specify additional hardware rendering features (like filtering) that consumer graphics card makers could optionally implement. The problem with 3DDDI was that it invited problems for game developers out of the gate. There were so many cap-bits that every game would either have to support an innumerable number of feature combinations, to take advantage of every possible way that hardware vendors might choose to design their chips (producing an untestable number of possible hardware configurations and a huge amount of redundant art assets that games would have to lug around to look good on any given device), OR games would revert to using a simple set of common 3D features supported by everyone, and there would be NO competitive advantage for companies to support new hardware 3D capabilities that did not have instant market penetration. The OpenGL crowd at Microsoft did not see this as a big problem in their world, because everyone there just bought a $100,000 workstation that supported everything they needed.

The realization that we could not get what we needed from the OpenGL team was one of the primary reasons we decided to create a NEW 3D API just for gaming. The issue had nothing to do with the API itself, but with the driver architecture underneath, because we needed to create a competitive market that did not result in chaos. In this respect the Direct3D API was not an alternative to the OpenGL API; it was a driver API designed for the sole economic purpose of creating a competitive market for 3D consumer hardware. In other words, the Direct3D API was not shaped by “technical” requirements so much as economic ones. In this respect the Direct3D API was revolutionary in several interesting ways that had nothing to do with the API itself but rather with the driver architecture it would rely on.

When we decided to acquire a 3D team to build Direct3D, I was chartered with surveying the market for candidate companies with the right expertise to help us build the API we needed. As I have previously recounted, we looked at Epic Games (creators of the Unreal engine), Criterion (later acquired by EA), Argonaut and finally Rendermorphics. We chose Rendermorphics (based in London) because of the large number of quality 3D engineers the company employed and because the founder, Servan Keondjian, had a very clear vision of how consumer 3D drivers should be designed for maximum future compatibility and innovation. The first implementation of the Direct3D API was rudimentary but quickly evolved towards something with much greater future potential.

[Figure: D3DOVER left-handed coordinate system. Whoops!]

My principal memory from that period was a meeting in which I, as the resident expert on the DirectX 3D team, was asked to choose a handedness for the Direct3D API. I chose a left-handed coordinate system, in part out of personal preference. I remember it now only because it was an arbitrary choice that caused no end of grief for years afterwards, as all other graphics authoring tools adopted the right-handed coordinate system of the OpenGL standard. At the time nobody knew or believed that a CAD tool like Autodesk’s would evolve to become the standard tool for authoring game graphics. Microsoft had acquired Softimage with the intention of displacing Autodesk and Maya anyway. Whoops…
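
To illustrate the grief (my own toy example, not from the original post): the cross product of the X and Y axes is (0, 0, 1) under either convention, but Direct3D’s left-handed system reads +Z as pointing into the screen while OpenGL’s right-handed system reads it as pointing toward the viewer, so content authored for one typically needs a Z flip (and a winding-order change) for the other.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Standard cross product; the formula is the same in both conventions.
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

int main() {
    Vec3 right = {1, 0, 0}, up = {0, 1, 0};
    Vec3 z = cross(right, up); // (0, 0, 1) either way...
    std::printf("x cross y = (%g, %g, %g)\n", z.x, z.y, z.z);

    // ...but left-handed Direct3D calls +Z "into the screen" while
    // right-handed OpenGL calls it "toward the viewer". Converting a mesh
    // between the two usually means negating every Z coordinate.
    Vec3 flipped = { z.x, z.y, -z.z };
    std::printf("after Z flip = (%g, %g, %g)\n",
                flipped.x, flipped.y, flipped.z);
}
```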

The early Direct3D HAL (Hardware Abstraction Layer) was designed in an interesting way. It was structured vertically into three stages.

DX 2 HAL

The highest, most abstract layer was the transformation layer, the middle layer was dedicated to lighting calculations, and the bottom layer was for rasterization of the finally transformed and lit polygons into depth-sorted pixels. The idea behind this vertical driver structure was to provide a relatively rigid feature path for hardware vendors to innovate along. They could differentiate their products from one another by designing hardware that accelerated increasingly higher layers of the 3D pipeline, resulting in greater performance and realism without incompatibilities, a sprawling matrix of configurations for games to test against, or redundant art assets. Since the Direct3D API created by Rendermorphics provided a “pretty fast” software implementation of any functionality not accelerated by the hardware, game developers could focus on the Direct3D API without worrying about myriad permutations of incompatible 3D hardware capabilities. At least that was the theory. Unfortunately, like the 3DDDI driver specification, Direct3D still included capability bits designed to enable hardware features that were not part of the vertical acceleration path. Although I actively objected to Direct3D’s tendency to accumulate capability bits, the team felt extraordinary competitive pressure from Microsoft’s own OpenGL group and from the hardware vendors to support them.
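
Here is a conceptual sketch of that vertical structure (my own reconstruction with toy math; the real HAL was driver code, not this): vertices flow one way through transform, then lighting, then rasterization, and a vendor could accelerate the stack from the bottom up without changing what games see.

```cpp
#include <vector>

struct Vertex    { float x, y, z; };
struct LitVertex { float x, y, z; float shade; };

// Stage 1: transformation (world/view/projection math).
std::vector<Vertex> transform(const std::vector<Vertex>& in) {
    std::vector<Vertex> out = in;
    for (auto& v : out) v.z += 5.0f; // toy "camera" translation
    return out;
}

// Stage 2: lighting (fixed-function per-vertex shade, no branching).
std::vector<LitVertex> light(const std::vector<Vertex>& in) {
    std::vector<LitVertex> out;
    for (const auto& v : in)
        out.push_back({v.x, v.y, v.z, 1.0f / (1.0f + v.z)}); // toy falloff
    return out;
}

// Stage 3: rasterization into depth-sorted pixels (stubbed here).
void rasterize(const std::vector<LitVertex>& tris) { (void)tris; }

int main() {
    std::vector<Vertex> mesh = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};
    // The rigid, one-way vertical path: hardware could take over
    // rasterize, then light + rasterize, then all three stages.
    rasterize(light(transform(mesh)));
}
```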

The hardware companies, seeking a competitive advantage for their own products, would threaten to support and promote OpenGL to game developers because the OpenGL capability-bit driver model enabled them to create features for their hardware that nobody else supported. It was common (and still is) for the hardware OEMs to pay game developers to adopt features unique to their products but incompatible with the installed base of gaming hardware, forcing consumers to constantly upgrade their graphics cards to play the latest PC games. Game developers alternately hated capability bits because of their complexity and incompatibilities, but wanted to take the marketing dollars from the hardware OEMs for supporting “non-standard” 3D features.

Overall I viewed this dynamic as destructive to a healthy PC gaming economy and advocated resisting the trend regardless of what the OpenGL people or the OEMs wanted. I believed that creating a consistent, stable consumer market for PC games was more important than appeasing the hardware OEMs. As such, I was a strong advocate of the relatively rigid vertical Direct3D pipeline and a proponent of introducing only API features that we expected to become universal over time. I freely confess that this view implied significant constraints on innovation in other areas and placed a high burden of market prescience on the Direct3D team.

The result, in my estimation, was pretty good. The Direct3D fixed function pipeline, as it was known, produced a very rich and growing PC gaming market with many healthy competitors through to DirectX 7.0 and the early 2000’s. The PC gaming market boomed and grew to be the largest gaming market on Earth. It also resulted in a very interesting change in the GPU hardware architecture over time.

Had the Direct3D HAL been a flat driver model with just capability bits for rasterization, as the OpenGL team at Microsoft had advocated, 3D hardware makers would have competed by accelerating just the bottom layer of the 3D rendering pipeline and differentiating their hardware via capability bits that were incompatible with their competitors. The result of introducing the vertical layered architecture was that 3D hardware vendors were all encouraged to add features to their GPUs that moved them closer to general-purpose CPU architectures, namely very fast floating point operations, in a consistent way. Thus consumer GPUs evolved over the years to increasingly resemble general-purpose CPUs… with one major difference. Because the 3D fixed-function pipeline was rigid, the Direct3D architecture afforded very little opportunity for the frequent code branching that CPUs are designed to optimize for. GPUs achieved their amazing performance and parallelism in part by being free to assume that little or no branching code would ever occur inside a Direct3D graphics pipeline. Thus, instead of evolving one giant monolithic CPU core with massive numbers of transistors dedicated to efficient branch prediction, as an Intel CPU has, a GPU has hundreds to thousands of simple cores that have no branch prediction. They can chew through a calculation at incredible speed, confident in the knowledge that they will not be interrupted by code branching or random memory accesses to slow them down.
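
A tiny illustration of that difference (mine, not the post’s): GPU-friendly code replaces per-element branches with uniform arithmetic, so thousands of simple cores can execute the same instruction in lockstep with no branch predictor at all.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    float in[8] = {-2, -1, 0, 1, 2, 3, -4, 5};
    float out[8];
    for (int i = 0; i < 8; ++i) {
        // Branchy CPU style:     out[i] = (in[i] > 0) ? in[i] : 0;
        // Branch-free GPU style: the same result as pure arithmetic,
        // which a wide SIMD machine runs identically on every lane.
        out[i] = std::max(in[i], 0.0f);
    }
    for (float v : out) std::printf("%g ", v);
    std::printf("\n");
}
```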

Up through DirectX 7.0, the underlying parallelism of the GPU was hidden from the game. As far as the game was concerned some hardware was just faster than other hardware, but the game should not have to worry about how or why. The early DirectX fixed-function pipeline architecture had done a brilliant job of enabling dozens of disparate competing hardware vendors to all take different approaches to achieving superior cost and performance in consumer 3D without making a total mess of the PC gaming market for game developers and consumers. It was not pretty and was not entirely executed with flawless precision, but it worked well enough to create an extremely vibrant PC gaming market through the early 2000’s.

Before I move on to discussing the more modern evolution of Direct3D, I would like to highlight a few other important ideas that influenced the architecture of early modern GPUs. Recall that in the early to mid 1990’s RAM was relatively expensive, so there was a lot of emphasis on consumer 3D techniques that conserved RAM usage. The Talisman architecture, which I have told many (well-deserved) derogatory stories about, was highly influenced by this observation.

[Figure: Talisman]
Search this blog for the tags “Talisman” and “OpenGL” for many stories about the internal political battles over these technologies within Microsoft.

Talisman relied on a grab bag of graphics “tricks” to minimize GPU RAM usage that were not very generalized. The Direct3D team, heavily influenced by the Rendermorphics founders, had made a difficult choice in its philosophical approach to creating a mass market for consumer 3D graphics. We had decided to go with a simpler, more general-purpose approach to 3D that relied on a very memory-intensive data structure called a Z-buffer to achieve great looking results. Rendermorphics had managed to achieve very good 3D performance in pure software with a software Z-buffer in the Rendermorphics engine, which had given us the confidence to take the bet on a simpler, more general-purpose 3D API and driver model and trust that the hardware RAM market and prices would eventually catch up. Note, however, that at the time we were designing Direct3D we did not know about the Microsoft Research group’s “secret” Talisman project, nor did they expect that a small group of evangelists would cook up a new 3D API standard for gaming and launch it before their own wacky initiative could be deployed. In short, one of the big bets that Direct3D made was that the simplicity and elegance of Z-buffers for game development was worth the risk that consumer 3D hardware would struggle to affordably support them early on.
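
For readers who have never met one, here is a minimal sketch of the Z-buffer idea (my own toy demo): spend RAM on a per-pixel depth value so surfaces resolve correctly at every pixel in any draw order, instead of sorting whole polygons painter’s-algorithm style and living with wrong intersections.

```cpp
#include <array>
#include <cstdio>
#include <limits>

constexpr int W = 4, H = 4;
std::array<float, W * H> zbuf;  // one depth per pixel: the RAM cost
std::array<int,   W * H> color; // whichever surface currently "wins"

// Plot one pixel of a surface: keep it only if it is nearer than what is
// already stored there. No polygon sorting needed, in any draw order.
void plot(int x, int y, float depth, int surface) {
    int i = y * W + x;
    if (depth < zbuf[i]) { zbuf[i] = depth; color[i] = surface; }
}

int main() {
    zbuf.fill(std::numeric_limits<float>::infinity());
    color.fill(0);
    plot(1, 1, 0.8f, 1); // far surface drawn first
    plot(1, 1, 0.3f, 2); // nearer surface wins regardless of order
    plot(1, 1, 0.5f, 3); // middle surface correctly rejected
    std::printf("pixel(1,1) shows surface %d\n", color[1 * W + 1]); // -> 2
}
```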

Despite the big bet on Z-buffer support, we were intimately aware of two major limitations of the consumer PC architecture that needed to be addressed. The first was that the PC bus was generally very slow, and the second was that it was much slower to copy data from a graphics card than it was to copy data to a graphics card. What that generally meant was that our API design had to strive to send data to the GPU in the largest, most compact packages possible for processing, and absolutely minimize any need to copy data back from the GPU for further processing on the CPU. This meant that the Direct3D API was optimized to package data up and send it on a one-way trip. This was of course an unfortunate constraint, because there were many brilliant 3D effects that could best be accomplished by mixing the CPU’s efficient branch prediction and robust floating point support with the GPU’s incredible parallel rendering performance.

One of the fascinating consequences of that constraint was that it forced GPUs to become even more general purpose to compensate for the inability to share data with the CPU efficiently. This was possibly the opposite of what Intel intended to happen with its limited bus performance, because Intel was threatened by the idea that auxiliary cards would offload more processing from their CPUs, thereby reducing the Intel CPU’s value and central role in PC computing. It was reasonably believed at that time that Intel deliberately dragged its feet on improving PC bus performance to deter a market for alternatives to its CPUs for consumer media processing applications. Recall from my earlier blogs that the main reason for creating DirectX was to prevent Intel from trying to virtualize all Windows media support on the CPU. Had Intel adopted a PC bus architecture that enabled extremely fast access to system RAM shared by auxiliary devices, it is less likely that GPUs would have evolved the relatively rich set of branching and floating point operations they support today.

To overcome the fairly stringent performance limitations of the PC bus, a great deal of thought was put into techniques for compressing and streamlining DirectX assets being sent to the GPU, to minimize bus bandwidth limitations and the need for round trips from the GPU back to the CPU. The early need for the rigid 3D pipeline had interesting consequences later on when we began to explore streaming 3D assets over the Internet via modems.

We recognized early on that support for compressed texture maps would dramatically improve bus performance and reduce the amount of onboard RAM consumer GPUs needed. The problem was that no standards existed for 3D texture formats at the time, and knowing how fast image compression technologies were evolving, I was loath to impose a Microsoft-specified one “prematurely” on the industry. To overcome this problem we came up with the idea of “blind compression formats”. The idea, which I believe was captured in one of the many DirectX patents that we filed, was that a GPU could encode and decode image textures in an unspecified format, but the DirectX APIs would allow the application to read and write them as though they were always raw bitmaps. The Direct3D driver would encode and decode the image data as necessary under the hood, without the application needing to know how it was actually being encoded on the hardware.
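
A sketch of how such a “blind” texture might look to an application (my reconstruction of the concept, not the patented design or any real DirectX interface): the app only ever sees raw pixels through a lock/unlock pair, while the driver transcodes to and from whatever opaque format the hardware actually stores.

```cpp
#include <cstdint>
#include <vector>

class BlindTexture {
    std::vector<uint8_t> hardwareBits; // unspecified, vendor-private format
public:
    // Lock: the driver decodes its private format into a raw RGBA bitmap.
    std::vector<uint32_t> lock(int w, int h) {
        std::vector<uint32_t> raw(static_cast<size_t>(w) * h);
        // ...vendor decode of hardwareBits into raw would happen here...
        return raw;
    }
    // Unlock: the driver re-encodes the edited bitmap; the application
    // never learns how the texture is really stored.
    void unlock(const std::vector<uint32_t>& raw) {
        hardwareBits.assign(raw.size(), 0);
        // ...vendor encode of raw into hardwareBits would happen here...
    }
};

int main() {
    BlindTexture tex;
    auto pixels = tex.lock(64, 64); // app reads/writes plain bitmaps
    pixels[0] = 0xFFFF0000u;        // edit a pixel as if uncompressed
    tex.unlock(pixels);             // driver hides the real encoding
}
```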

By DirectX 6.0 in 1998, 3D chip makers had begun to devise good quality 3D texture formats, such that we were able to license one of them (from S3) for inclusion with Direct3D.

http://www.microsoft.com/en-us/news/press/1998/mar98/s3pr.aspx

DirectX 6.0 was actually the first version of DirectX that was included in a consumer OS release (Windows 98). Until that time, DirectX was just a family of libraries that shipped with the Windows games that used them. DirectX did not actually become a Windows API until five generations after its first release.

DirectX 7.0 was the last generation of DirectX that relied on the fixed-function pipeline we had laid out in DirectX 2.0 with the first introduction of the Direct3D API. This was a very interesting transition period for Direct3D for several reasons:

1) The original DirectX team founders had all moved on,

2) Microsoft’s internal Talisman project and its reasons for supporting OpenGL had both run their course,

3) Microsoft had brought game industry veterans like Seamus Blackley, Kevin Bacchus, Stuart Moulder and others into the company in senior roles, and

4) Gaming had become a strategic focus for the company.

DirectX 8.0 marked a fascinating transition for Direct3D because, with the death of Talisman and the loss of strategic interest in OpenGL 3D support, many of the people from those groups came to work on Direct3D. Talisman, OpenGL and game industry veterans all came together to work on Direct3D 8.0. The result was very interesting. Looking back, I freely concede that I would not have made the same set of choices that this group made for DirectX 8.0, but it seems to me that everything worked out for the best anyway.

Direct3D 8.0 was influenced in several interesting ways by the market forces of the late 20th century. Microsoft had largely unified against OpenGL and found itself competing with the Khronos Group standards committee to advance Direct3D faster than OpenGL. With the death of SGI, control of the OpenGL standard fell into the hands of the 3D hardware OEMs, who of course wanted to use the standard to create hardware features that differentiated them from their competitors and to force Microsoft to support 3D features they wanted to promote. The result was that Direct3D and OpenGL both became much more complex, and they tended to converge during this period. There was a stagnation in 3D feature adoption by game developers from DirectX 8.0 through DirectX 11.0 as a result of these changes. Creating game engines became so complex that the market also converged around a few leading providers, including Epic’s Unreal Engine and the Quake engine from id Software.

Had I been working on Direct3D at the time, I would have stridently resisted letting the 3D chip OEMs lead Microsoft around by the nose chasing OpenGL features, instead of focusing on enabling game developers and a consistent, quality consumer experience. I would have opposed introducing shader support in favor of trying to keep the Direct3D driver layer as vertically integrated as possible to ensure feature conformity among hardware vendors. I also would have strongly opposed abandoning DirectDraw support as was done in Direct3D 8.0. The 3D guys got out of control and decided that nobody should need pure 2D APIs once developers adopted 3D, failing to recognize that simple 2D APIs enabled a tremendous range of features and ease of programming that the majority of developers who were not 3D geniuses could easily understand and use. Forcing the market to learn 3D dramatically constrained the set of people with the expertise to adopt it. Microsoft later discovered the error in this decision and re-introduced DirectDraw as the Direct2D API. Basically, letting the 3D design geniuses loose on Direct3D 8.0 made it brilliant, powerful and useless to average developers.

At the time DirectX 8.0 was being made I was starting my first company, WildTangent Inc., and ceased to be closely involved with what was going on with DirectX features. However, years later I was able to get back to my 3D roots and took the time to learn Direct3D programming in DirectX 11.1. Looking back, it’s interesting to see how the major architectural changes that were made in DirectX 8 resulted in the massively convoluted and nearly incomprehensible Direct3D API we see today. Remember the 3-stage DirectX 2 pipeline that separated transformation, lighting and rasterization into three basic stages? Here is a diagram of the modern DirectX 11.1 3D pipeline.

[Figure: DirectX 11 pipeline]

Yes, it grew to 9 stages, or 13 stages when arguably optional sub-stages, like the compute shader, are included. Speaking as somebody with an extremely lengthy background in very low-level 3D graphics programming, I’m embarrassed to confess that I struggled mightily to learn Direct3D 11.1 programming. The API had become very nearly incomprehensible and unlearnable. I have no idea how somebody without my extensive background in 3D and graphics could ever begin to learn how to program a modern 3D pipeline. As amazingly powerful and featureful as this pipeline is, it is also damn near unusable by any but a handful of the brightest minds in 3D graphics. In the course of catching up on my Direct3D I found myself simultaneously in awe of the astounding power of modern GPUs and where they were going, and in shocked disgust at the absolute mess the 3D pipeline had become. It was as though the Direct3D API had become a dumping ground for every 3D feature that every OEM had demanded over the years.

Had I not enjoyed the benefit of a decade-long break from Direct3D involvement, I would undoubtedly have a long history of bitter blogs written about what a mess my predecessors had made of a great and elegant vision for consumer 3D graphics. Weirdly, however, leaping forward in time to the present day, I am forced to admit that I’m not sure it was such a bad thing after all. One result of the stagnation of gaming on the PC, caused by the mess Microsoft and the OEMs made of the Direct3D API, was a successful XBOX. Having a massively fragmented 3D API is not such a problem if game developers have only one hardware configuration to support, as is the case with a game console. Direct3D 8.0, with its early primitive shader support, was the basis for the first XBOX’s graphics API. Microsoft selected NVIDIA for the first XBOX, giving NVIDIA a huge advantage in the 3D PC chip market. DirectX 9.0, with more advanced shader support, was the basis for the XBOX 360, for which Microsoft selected ATI to provide the 3D chip, this time handing AMD a huge advantage in the PC graphics market. In a sense the OEMs had screwed themselves. By successfully influencing Microsoft and the OpenGL standards groups to adopt highly convoluted graphics pipelines that supported all of their feature sets, they had forced themselves to generalize their GPU architectures, and the 3D chip market consolidated around whatever 3D chip architecture Microsoft selected for its consoles.

The net result was that the retail PC game market largely died. It was simply too costly, too insecure and too unstable a platform for publishing high-production-value games any longer, with the partial exception of MMOGs. Microsoft and the OEMs had conspired together to kill the proverbial golden goose. No biggie for Microsoft, as they were happy to gain complete control of the former PC gaming business by virtue of controlling the XBOX.

From the standpoint of the early DirectX vision, I would have said that this outcome was a foolish, shortsighted disaster. Had Microsoft maintained a little discipline and strategic focus on the Direct3D API, they could have ensured that there were NO other consoles in existence within a single generation, by using the XBOX to strengthen the PC gaming market rather than inadvertently destroying it. While Microsoft congratulates itself for the first successful U.S. launch of a console, I would count all the gaming dollars collected by Sony, Nintendo and mobile gaming platforms over the years that might have remained on Microsoft-controlled platforms had Microsoft maintained a cohesive strategy across its media platforms. I say all of this from a past-tense perspective because, today, I’m not so sure that I’m really all that unhappy with the result.

The new generation of consoles from Sony AND Microsoft have reverted to a PC architecture! The next-generation GPUs are massively parallel, general-purpose processors with intimate access to memory shared with the CPU. In fact, the GPU architecture became so generalized that a new pipeline stage called DirectCompute was added in DirectX 11 that simply allowed the CPU to bypass the entire convoluted Direct3D graphics pipeline in favor of programming the GPU directly. With the introduction of DirectCompute, the promise of simple 3D programming returned in an unexpected form. Modern GPUs have become so powerful and flexible that the possibility of writing 3D engines directly for the GPU, without making any use of the traditional 3D pipeline, is an increasingly practical and appealing programming option. From my perspective here in the present day, I would anticipate that within a few short generations the need for the traditional Direct3D and OpenGL APIs will vanish in favor of new game engines with much richer and more diverse feature sets that are written entirely in device-independent shader languages like Nvidia’s CUDA and Microsoft’s AMP APIs.

Today, as a 3D physics engine developer, I have never been so excited about GPU programming, because of the sheer power and relative ease of programming directly to the modern GPU without needing to master the enormously convoluted 3D pipelines associated with the Direct3D and OpenGL APIs. If I were responsible for Direct3D strategy today, I would be advocating dumping the investment in the traditional 3D pipeline in favor of rapidly opening direct access to a rich GPU programming environment. I personally never imagined that my early work on Direct3D would, within a couple of decades, contribute to the evolution of a new kind of ubiquitous processor that enables the kind of incredibly realistic and general modeling of light and physics that I had learned about in the 1980’s but never believed I would see computers powerful enough to model in real time during my active career.

Radeon HD 8970M Mobile Graphics Chip Fastest Claimed

Jakarta – The Radeon HD 8000M series has been officially announced, though not for desktop computers but for notebook users. And it is not just one variant: there are several models ready to fill the mainstream, performance and enthusiast segments.

In the mainstream segment, AMD has prepared the Radeon HD 8500M and HD 8600M. The performance class is entrusted to the Radeon HD 8700M and HD 8800M, while the top enthusiast class is occupied by the Radeon HD 8970M.

Interestingly, the Radeon HD 8970M is claimed to be the fastest mobile graphics chip today. The claim is based on its known specs: a clock speed of 850 MHz (900 MHz when turbo mode is active), 1280 stream processors and a RAM speed of 1.2 GHz.

This combination of specifications is touted to achieve 2304 GFLOPS of single-precision compute performance, and 144 GFLOPS in double-precision computing.
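
As a rough sanity check (my own arithmetic, not AMD’s, and assuming each stream processor executes one fused multiply-add, i.e. two floating-point operations, per clock at the 900 MHz turbo clock):

1280 stream processors × 2 FLOPs/clock × 0.9 GHz = 2304 GFLOPS (single precision)
2304 ÷ 16 = 144 GFLOPS (double precision, at an assumed 1/16 DP rate)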

The first notebook to use it, as quoted from MaximumPC on Wednesday (15/05/2013), is the MSI GX70 gaming notebook. To prove the claim, MSI is also bundling the game Crysis 3 with the product.

Acer launches new Iconia A1-810

Acer Sales and Services Sdn Bhd (Acer) has recently launched a brand new 7.9 inch tablet that is supposed to be a cheaper alternative to its contenders, the Note 3 and the Mini.

At half the price of an iPad, the Acer Iconia comes with a quad-core CPU, HDMI output and the latest version of Android, 4.2.2.

The design of the tablet is a traditional one, noted Ting Meng Hung, regional manager for Acer, who added that while it is quite thick compared to other tablets, it is very light.

As for hook-ups, there is a micro SD card slot, an HDMI output and both a front and a rear webcam. The rear webcam is five megapixels (MP) and can record videos at up to 30 frames per second.

The tablet has a 7.9-inch five-point multi-touch widescreen that is intuitive and launches apps instantly.

“An interesting characteristic, similar to other tablets, is that if a corner of the touch screen is kept pressed with your finger, the rest of the screen reacts to other commands like scroll, launch or zoom,” Ting noted.

The tablet is equipped with a quad-core CPU and one gigabyte (GB) of RAM, ensuring the smooth rendering of high-quality apps and complex games.

The internal memory starts from 8GB and can be extended via SD card by up to 32GB.

For the operating system, it comes with the latest Android 4.2.2, which runs optimally with these components, Ting highlighted.

For this minimal price we have here a tablet equipped with HDMI hook-up, Bluetooth, GPS and a quad-core CPU, as well as a webcam that records Full HD videos.

“At the moment we are only offering the wifi model and depending on the demand we may bring in the 3G version of the tablet as well,” Ting explained.

The Iconia is priced at RM599 and for more information please call Acer’s office at 082-456700 or visit them at Green Heights Commercial Centre, Lorong Lapangan Terbang 2, 93250, Kuching.


Effective Steps to Get Cheap Auto Insurance and Cheap Car Insurance Quotes

In purchasing auto insurance, you must act smartly. Every shopper wishes to pay less and obtain more, so they put forth time and effort to find cheap auto insurance. If you want the cash you spend to be worth your purchase, be vigilant while shopping. Obtaining cheap car insurance quotes is not an unfeasible task, but while buying, evaluate the quotes to get the best one.

Comparing and evaluating must be done in terms of coverage, price and discounts. There is a belief among some people that just by looking at the prices you can weigh quotes against one another and make the buy, but this approach is not successful. To acquire the most excellent deal and make the correct decision, you must try to collect as many quotes as possible. The Internet is a great help in formulating a good decision: a lot of auto insurance sites present free online quote comparisons, and there is no requirement to buy the insurance once you get your quotes.

The following are some of the steps on how to get cheap auto insurance and cheap auto insurance quotes. Read through!

The first step is to find a consistent comparison site. Read and understand the website’s different pages explaining the process of comparison and what is offered. You will need to perform state-specific comparisons for your state. Make sure the company provides adequate information in every quote so that you can come to a knowledgeable decision. In addition, you will have to fill in a form stating your car type and year, general information, possible discount information and a lot more. The information you provide will help the providers present you an accurate quote. But remember that the real price might still be lower or higher as you look into different providers further.

Evaluating the types of coverage and pricing is the next step. The types of coverage you choose can make a big difference in quotes. When comparing cheap auto insurance, stay away from add-ons and choose only the coverage you need. Add-ons include extras such as comprehensive coverage and towing. Comprehensive coverage is helpful if you reside in a place that is vulnerable to bad weather or has high rates of crime; in other words, comprehensive coverage might be worth the extra investment if your automobile is at risk of being stolen because of where you live.

The last step to get cheap auto insurance is to increase your deductible. Insurers will regularly lower your premium if you accept some of the financial burden of repairs. High deductibles are helpful because you assume some of the risk while also enjoying the benefit of lower premiums.

Extreme Wheelchair Sports

Wheelchair sports were first recognized in 1957, with the first Wheelchair Games taking place in New York City. Since then, the sport has evolved from wheelchair basketball and wheelchair soccer to extreme sports that are utterly unpredictable. That is why Wheelchair Extreme Sports have earned such respect and are emerging as competitive sports to be taken seriously. The best thing about Wheelchair Extreme Sports is that they don’t demand a crazy amount of media attention or ask to be recognized, but simply continue to make their mark by achieving incredible feats never believed to be possible.

The thrills of 4-wheel downhill mountain biking are mind-blowing. Its secret is a 4-wheeler’s low centre of gravity and stability, which prove to be phenomenally useful on the bends. The scene is rife with techno-weenies who know all the ways to invent and re-invent faster 4-wheelers, guaranteed to show any gravel track what real speed feels like. The fitness of wheelchair athletes is remarkable, not to mention their outstanding upper body strength and relentless endurance on epic 4-wheel races. If “seeing is believing” is a phrase that applies to you, then the UK is where it’s at. The recently founded Rough Riderz club aims to promote downhill biking, whether it’s on 2 or 4 wheels, and to create an all-inclusive sporting environment. Because, let’s get real: whether you’re going to jet down a steep dirt track at a speed where only tunnel vision is possible on 2 or 4 wheels, you’re still pushing the limits of “extreme”, and that’s what it’s all about.

August 26, 2010 was an unforgettable day for the wheelchair sports arena: Aaron Fotheringham completed the world’s first-ever double backflip in a wheelchair. It was sick. The 18-year-old from Las Vegas earned himself the nickname “Wheels” because of his unrelenting passion for all things extreme. For athletes like Aaron, it’s always about conquering a higher ramp and performing the ultimate power-slide while churning the maximum amount of adrenaline through his system. Wicked.

Extreme sports are the driving force behind the evolution of the sporting world because theirs is an environment that redefines and recreates itself over and over again. The disabled mobility sports arena is doing just that: defying physical limitations, staring danger in the face, and saying, “bring it on”. Although Wheelchair Extreme Sports haven’t emerged as prominently in South Africa as in Europe and the USA, I hear that the adrenaline-bug travels fast. So watch this space.