Xrefs In A Mixed Environment

 

Seeing a consistent issue on a fresh install of AutoCAD Architecture 2009 Update 2 (part of the Revit 2009 suite). System details: newly built machine using the X38 chipset, Quadro 1700 GPU, Intel E8600 CPU, and 4GB of DDR2-800 RAM (showing as 3.25GB in the OS). The OS is Windows XP Professional SP3 with all current critical patches, and .NET 2.0 and 3.0 are fully updated. The most recent chipset and NVIDIA GPU drivers as of October 31 are installed. The LAN runs at 1Gb and traffic is very low during the process.

The office is a mixed environment: xref and main drawing files are all 2D, shared across the network, and everything and everyone uses the AutoCAD 2007 format. The problem: on the ACA 2009 system, every time a project xref is updated, reloading it into the main drawing freezes ACA. CPU usage goes to 100% on both cores, the GUI becomes unresponsive, and memory use slowly climbs.

It stays in this state until the system hard-crashes (out of memory), which can take 30 minutes, or until the acad process is killed in Task Manager. The same files work fine on older systems running ACA 2007.

Repro steps:
1) Open the file ADRAMain602-CH.dwg.
2) Open the xref file ADRxrefsubstructure01.dwg.
3) Draw something in ADRxrefsubstructure01.dwg and save it.
4) In ADRAMain602-CH.dwg, type XREF, right-click the xref just saved, and hit 'Reload'. Crash.

Steps taken so far: we have audited and purged all project files in ACA 2009, and no errors are found at all. We have also XCLIPped all files where possible to reduce load. WHIPTHREAD has been set to 1, 2, and 3 with no change in the crash behavior.

Has anyone else seen this issue with ACA 2009? If so, do you know a fix or workaround?

Thanks in advance for any advice you may be able to give.

How big is the xref? I often crash when loading a large xref. My computer specs are very similar to yours, and my problem is a memory issue. Also try getting rid of unneeded layer filters; the scale issue stated above is not uncommon.

Will try the suggestions you and feargt made, then. The xref file ADRxrefsubstructure01.dwg is about 327KB in size. Memory use on the system at the start of the reload process hovers around 540MB and gradually rises during the freeze to the 3.25GB limit. As stated, though, on older AutoCAD versions we don't see this issue at all with the same files.

The reload takes less than 5 seconds in AutoCAD 2007, so unless the Autodesk dev team really changed how memory is used (cough), I am skeptical that memory limitations are the issue. Browsing the Autodesk site, I don't see any hotfixes for a reload memory leak, but that may be what we are seeing here.
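For what it's worth, rather than eyeballing Task Manager I've been logging the acad.exe working set with a small throwaway tool while the reload runs. A rough sketch of the idea is below; it's nothing Autodesk provides, just standard Win32/psapi calls, and the tool name and pass-the-PID-by-hand approach are my own convention:

```c
/* Rough sketch: poll the working set of a process (e.g. acad.exe) once a
   second and print it, so the climb during an xref reload can be logged
   instead of eyeballed in Task Manager. Pass the process ID on the command
   line. Link with psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    HANDLE process;
    PROCESS_MEMORY_COUNTERS pmc;
    DWORD pid;

    if (argc < 2) {
        fprintf(stderr, "usage: memwatch <pid>\n");
        return 1;
    }
    pid = (DWORD)atoi(argv[1]);

    process = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                          FALSE, pid);
    if (process == NULL) {
        fprintf(stderr, "OpenProcess failed (%lu)\n", GetLastError());
        return 1;
    }

    /* Print the working set every second; stop when the query fails
       (e.g. the process has gone away) or the tool is interrupted. */
    while (GetProcessMemoryInfo(process, &pmc, sizeof(pmc))) {
        printf("working set: %lu KB\n",
               (unsigned long)(pmc.WorkingSetSize / 1024));
        Sleep(1000);
    }

    CloseHandle(process);
    return 0;
}
```

Grab the acad.exe PID from Task Manager and start it just before hitting Reload, and the climb toward the 3.25GB ceiling gets captured line by line instead of being lost when the process is killed.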

I'm new here but I was searching on this subject and this is the first thing that popped up. We have AC2009 here and it has created a bit of havoc.

Along with it being a major resource hog and buggy as hell, it has this wonderful feature of simply disappearing upon reloading an xref. No error, no recovery file, nada. I'll be checking in from time to time to see if anyone has managed to find a solution or if I can pass along some info on this. Time to do some poking around this board and see what can make my brain tingle a bit.

Compatibility Issues in Mixed Environments

Glossary. Mixed environment: a computer environment, usually a network, in which the operating systems of different machines are based on different character encodings.

As with other new technologies, it will take some time before Unicode emerges as the predominant character-encoding standard. Even though Windows NT, the first operating system based on Unicode/ISO 10646, shipped the same year the standard was published, other operating systems are developing and adapting more slowly. There will be a lengthy transition period during which a lot of existing data and older software will be unable to take advantage of Unicode, but this new standard will ultimately revolutionize the way the software industry represents text. The impact of Unicode will be comparable in magnitude to the impact of ASCII 30 years ago. For this reason, compatibility between Unicode and other character-encoding standards is crucial.

Unicode's first 256 characters correspond one-to-one with ANSI, the only exception being the ISO C1 control code range 0x80 through 0x9F. The first 256 Unicode characters have exactly the same layout as that of ISO 8859/1, which served as the basis for the Western European Windows ANSI (or Latin 1) code page. Any character encoded in the Windows code pages, including Far East editions, can be represented in Unicode. Compatibility is achieved through mapping and conversion.

For example, Windows NT and Windows 95 carry tables that map characters between Unicode and local code pages. Because Microsoft Windows NT Workstation and Microsoft Windows NT Advanced Server contain full support for Unicode, and Windows 95 contains only partial support, mixed environments with a Unicode-based server and non-Unicode clients are probable. In such a scenario, data passed between client and server must be converted. Rather than require a Unicode server to understand all possible local code pages, conversion is the responsibility of the client. Each client carries tables that map between its local code page and the corresponding subset of Unicode. There is no need for code-page information to be part of the network protocol. Whereas it is always possible to convert non-Unicode data to Unicode, it is not always possible to accomplish the reverse.
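As a concrete sketch of that mapping (purely illustrative: the code pages 1250 and 1252, the sample string, and the buffer sizes are my choices, not anything prescribed above), the Win32 functions MultiByteToWideChar and WideCharToMultiByte perform exactly this table-driven conversion between a local code page and Unicode:

```c
/* Minimal sketch: convert a string from a local code page (here Windows
   Latin 2, code page 1250) to Unicode, then back out to a different local
   code page (Windows Latin 1, code page 1252). With WC_NO_BEST_FIT_CHARS,
   characters that have no mapping in the target code page come back as that
   code page's default character instead of a look-alike. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "Dvorak" with an r-caron, as encoded in code page 1250 (0xF8). */
    const char latin2[] = "Dvo\xF8\xE1k";
    WCHAR unicode[32];
    char latin1[32];
    BOOL usedDefault = FALSE;

    /* Local code page -> Unicode: always possible. */
    MultiByteToWideChar(1250, 0, latin2, -1, unicode, 32);

    /* Unicode -> a different local code page: not always possible;
       usedDefault reports whether any character was substituted. */
    WideCharToMultiByte(1252, WC_NO_BEST_FIT_CHARS, unicode, -1,
                        latin1, 32, NULL, &usedDefault);

    printf("round-tripped: %s (default char used: %s)\n",
           latin1, usedDefault ? "yes" : "no");
    return 0;
}
```

Converting into Unicode always succeeds; converting back out only succeeds completely when the target code page happens to contain every character, which is the asymmetry described above.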

For example, if a client is running the Czech edition of Windows 3.1 (based on the Windows Latin 2 code page), any data it sends to a Windows NT server will be stored in Unicode format. If the Windows NT server then sends the data to a client that's running the Swedish edition of Windows 3.1 (based on the Windows Latin 1 code page), that client will convert as many Latin 2–specific characters as it can and display the remaining characters as default characters. You can call GetCPInfo to determine the default character for a particular code page. In some cases the default character is a question mark, so remember to be careful when mapping a character that might be part of a filename.
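A minimal sketch of that check (code page 1252 is used here only as an example):

```c
/* Minimal sketch: ask GetCPInfo for a code page's default character, the
   byte that substitutes for anything that cannot be mapped into that code
   page. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    CPINFO info;

    if (!GetCPInfo(1252, &info)) {   /* Windows Latin 1, as an example */
        fprintf(stderr, "GetCPInfo failed (%lu)\n", GetLastError());
        return 1;
    }

    printf("max bytes per char: %u\n", info.MaxCharSize);
    printf("default character : 0x%02X ('%c')\n",
           info.DefaultChar[0], info.DefaultChar[0]);
    return 0;
}
```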

The file system in Windows uses an underscore as the default character. Like Windows NT, your software must interact seamlessly in a mixed environment. The basic approach involves adding compatibility features, such as data conversion from old file formats and data interchange with non-Unicode programs. Not all system services, fonts, and tools will provide a uniform level of Unicode support in the near future.
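Following the file system's lead, a conversion that produces a name for a non-Unicode consumer can pass the underscore in explicitly as the default character and test whether it was actually needed. This is only a sketch; the sample name and the use of WC_NO_BEST_FIT_CHARS are illustrative choices:

```c
/* Minimal sketch: when converting a Unicode name to the local code page for
   a non-Unicode consumer, supply "_" as the default character (as the file
   system does) and warn if any character actually had to be replaced. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const WCHAR name[] = L"Plze\x0148.txt";   /* contains n-caron */
    char ansiName[MAX_PATH];
    BOOL replaced = FALSE;

    WideCharToMultiByte(CP_ACP, WC_NO_BEST_FIT_CHARS, name, -1,
                        ansiName, MAX_PATH, "_", &replaced);

    if (replaced)
        printf("warning: '%s' lost characters in the conversion\n", ansiName);
    else
        printf("converted cleanly: %s\n", ansiName);
    return 0;
}
```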

Even Windows NT currently falls short in ways that are sometimes confusing. For example, it allows you to sort text according to the Russian algorithm, but Cyrillic fonts are not part of the system's standard installation; you have to install the Lucida Sans Unicode font manually in order to display Russian text. (The exception to this is the International English edition available in Central and Eastern Europe, which does include Cyrillic fonts.) On the other hand, the Lucida Sans Unicode font contains Hebrew characters, but Windows NT does not currently support the layout of Hebrew text. Likewise, Windows NT supports the Win32 API entry points for Unicode, but Windows 95 does not. These inconsistencies can be aggravating, but they will gradually disappear, and in the long term non-Unicode applications will become the bottlenecks that inconvenience users, just as applications that are not DBCS-enabled inconvenience many users today.
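In practice, code that has to run on both systems is usually built from one source against the generic-text (TCHAR) names, producing an ANSI binary for Windows 95 and a Unicode binary for Windows NT. A minimal sketch of that convention, shown only as an illustration:

```c
/* Minimal sketch: with UNICODE defined, TCHAR is WCHAR and the un-suffixed
   API names expand to the ...W entry points (Windows NT); without it, TCHAR
   is char and the same names expand to the ...A entry points (Windows 95).
   One source file can therefore be built for either platform. */
#include <windows.h>

int main(void)
{
    TCHAR windir[MAX_PATH];

    /* Expands to GetWindowsDirectoryW or GetWindowsDirectoryA. */
    if (GetWindowsDirectory(windir, MAX_PATH) == 0)
        return 1;

    /* TEXT() produces an L"..." literal only in the Unicode build. */
    MessageBox(NULL, windir, TEXT("Windows directory"), MB_OK);
    return 0;
}
```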