Exchange 2000 Server and Exchange Server 2003 were the first versions to include varying levels of administrative control. As Exchange Server evolved, so did these controls.
Exchange Server 2007 includes four different administrative roles; being familiar with each will only help you grasp the changes Microsoft made to Exchange Server 2010.
Exchange Server 2007 administrative roles include:
* Exchange Organization Administrator: As the highest level of control over an Exchange Server 2007 organization, this role has no restrictions. Exchange Organization Administrator is the Exchange 2007 equivalent to the Exchange Full Administrator role in Exchange 2000 Server and Exchange 2003.
* Exchange Recipient Administrator: This role is intended for performing day-to-day management tasks like creating mailboxes. It allows you to work with users, groups, contacts and public folders, but doesn't allow organization- or server-level administration.
* Exchange Server Administrator: Exchange 2000 and Exchange Server 2003 had the similarly named Exchange Administrator role; however, that changed in Exchange Server 2007. In Exchange 2007, an admin can be given administrative permission over individual Exchange servers without being able to make organizational-level changes. Admins with this role are also prohibited from uninstalling Exchange.
* Exchange View Only Administrator: This role was carried over from Exchange 2000 Server and Exchange 2003. It provides you read-only access to an entire Exchange organization. Since this role doesn't let you make any changes to the Exchange organization, it's primarily used for training purposes.
The administrative permissions used in Exchange Server 2007 are an improvement over what was previously available, but the permissions aren't exactly granular. To achieve granular control over Exchange management permissions, many organizations combine a few of these permissions and access control lists (ACLs). Although this technique works, it's complicated and may have unintentional side effects if implemented incorrectly.
Cloud computing is the latest big thing among IT professionals. But unlike other recent hot trends you may have heard about (for example, tablet computing, voice recognition and quantum computing, among others), cloud computing offers turnkey solutions — fast.
What is cloud computing? Simply put, it means running some of your software applications and servers using computer space leased from a data center someone else operates.
Cloud Computing Basics
With cloud computing, you don’t have to worry about buying, installing and managing physical servers in your office; you just log in to the cloud servers with the username and password that your provider gives you.
Cloud computing is a broad term that can cover everything from Web mail services, such as Hotmail®, and online application providers, such as Google Docs™ and Salesforce.com, to companies that host servers. There is one key element all cloud computing offerings share: They allow businesses to quickly, easily and inexpensively leverage world-class services.
For example, businesses can use Dell cloud computing services to satisfy email needs; remotely manage all workstations; manage leads with Salesforce.com; or share documents worldwide with Google Docs. With cloud computing, businesses can use best-of-breed solutions without building out expensive infrastructure themselves.
That means your business could use:
Dell cloud computing services to satisfy your email needs or remotely manage all your workstations
Salesforce.com to manage customer leads
Google Docs for sharing documents worldwide
Below is a list of some of the pros and cons of cloud computing.
Cloud Computing Advantages:
Software Costs ― You can pay a monthly fee and use a software-as-a-service (SaaS) model. Software is provided and maintained by the provider, so there is no need to worry about purchasing new software or upgrades. You can also take advantage of any free applications your provider offers, and you don’t have to worry about downloading fixes and patches.
Accessibility ― Road warriors, remote employees, intra-office workers and field workers can easily access crucial documents and information whether they are in the office or not. Cloud computing also greatly increases workers’ ability to collaborate with other employees or clients. Workers can tap into software and add data using many different mobile devices regardless of location.
Scalability ― If your company’s IT needs vary from month to month, cloud computing can accommodate periods of very high or very low IT demands.
Cloud Computing Disadvantages:
Storage Costs ― Using a cloud computing service provider to store massive amounts of data can be expensive, not only because of the price of the physical storage, but also because you have to pay to access your data. If you use large amounts of storage, you might want to buy and manage it yourself.
Internet Instability ― Because you access the cloud through the Web, any disruptions to your Internet service will cause delays. As a result, cloud computing might not work for companies that need 100 percent uptime.
Software Limitations ― If Microsoft® Office is the cornerstone of your business productivity, then cloud computing may not be a good choice. Microsoft Word and Excel® aren’t offered in the cloud.
Cloud Computing Security
Since cloud computing is all about putting business-critical applications and data out on the Internet, security becomes critical. Focus on selecting a service provider with redundant servers and continuous automated data backup. Be confident that no one else can access confidential business data stored in the cloud. You should also seek assurances that if the cloud computing provider accidentally discloses information, the provider will assume responsibility and liability. Security can never be perfect, but your contract must cover liability if there is a lapse of any kind.
Cloud Computing Providers
You should approach buying cloud computing services as you would any other outsourcing decision. Select providers that offer the required features, whether those include email services, laptop data encryption or full customer relationship management solutions. Then evaluate the provider as a business. Think of the following questions:
How reliable is the cloud computing provider?
What contingency plans does the provider offer in the event of problems?
How long has it been in business?
Consider these Software-as-a-Service providers for the following cloud computing services:
Customer relationship management
Sales Force Automation ― Salesforce.com, Zoho, Xactly
Service and Support ― Salesforce.com, ServiceNow
Marketing Automation ― Eloqua
Collaboration and communication
Web Conferencing ― GoToMeeting, WebEx, DimDim
Email ― Rackspace Mail, Zimbra
Suites ― Google Apps, Zoho, Microsoft Business Productivity Online Suite, Lotus
Kernel memory resource bottlenecks can drastically limit Exchange 2003 scalability. Kernel resource usage may vary greatly from one Exchange server to another. A hardware platform that can support 4000 heavy users in one organization may be limited to half that number in a different organization because of kernel memory exhaustion.
This flash is the first in a series of three. These flashes are important reading for everyone who supports or administers large-scale Exchange servers.
Large increases in kernel memory consumption can be triggered by changes that few would anticipate as problematic. This could cause sudden and widespread Exchange server outages throughout an organization.
The purpose of this initial flash is to introduce the issue and provide technical background. The second and third articles in this series will address common factors that either limit or consume kernel memory, and provide specific advice about optimizations for better management of kernel memory. There may be additional articles in the series, as needed.
This article applies specifically to Exchange Server 2003 running on Windows Server 2003. However, much of the information presented here applies generally to application scalability on a 32-bit computing architecture.
The second flash in this series will discuss some new hardware features available on recent servers. Hot-add RAM and the installation of more than 4 gigabytes of RAM can consume large additional amounts of Windows kernel memory. The second flash will explain how these features work and how to optimize Exchange for them. This flash will be released shortly.
The third flash will explain how large security tokens presented by clients can quickly exhaust kernel memory, and provide recommendations for reducing average token size. This flash will be available near the 14th of December.
Personal computer hardware continues to improve rapidly and dramatically in speed and storage capacity. But one thing that hasn't changed is the 32-bit processor and operating system architecture in the majority of Intel and AMD based computers used today.
Hardware performance is no longer the most important computing bottleneck. Instead, the theoretical limits of a 32-bit architecture define the ceiling on application speed and scalability.
The problem with a 32-bit architecture is that an application can juggle a maximum of only four billion bytes of information at once. For complex applications that service thousands of simultaneous users, four billion is not very much.
It has taken 20 years for general computing needs to outgrow the 32-bit architecture. The last quantum jump from 16-bit to 32-bit computing was a necessary precondition for enabling the sophisticated applications we depend on every day. Going from 16-bit to 32-bit allowed programs to go from handling about 64,000 pieces of information at once to handling four billion--a multiplier of about 65,000. The next jump to 64-bit computing will allow applications to handle four billion times as much information as they do today.
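The multipliers in the paragraph above can be checked with a few lines of arithmetic (a quick sketch, not tied to any particular platform):

```python
# Number of distinct addressable units for each architecture width: 2 ** bits.
addressable_16 = 2 ** 16   # 65,536
addressable_32 = 2 ** 32   # 4,294,967,296 (about four billion)
addressable_64 = 2 ** 64

# The jump from 16-bit to 32-bit multiplied address space by 2 ** 16,
# and the jump from 32-bit to 64-bit multiplies it by 2 ** 32.
print(addressable_32 // addressable_16)  # 65536
print(addressable_64 // addressable_32)  # 4294967296
```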
Understanding the theoretical limitations of 32-bit architectures has not been very important to most people. Until recently, the ceiling on application scalability has been set by the performance limitations of processors, disks and networks. Theoretical 32-bit limits have not had a chance to come into play. But state of the art hardware can now process information so rapidly that everyone who works with large applications today needs a basic working knowledge of how memory works in a 32-bit world.
FREQUENTLY ASKED QUESTIONS (FAQ)
Why is a 32-bit architecture limited to 4 gigabytes of memory?
Before answering that, it is important to distinguish between memory address space and physical memory.
Each byte of memory in a computer must have a unique address so that applications can keep track of and identify the memory. In a 32-bit computer, the memory addresses are 32 bits long and stored as binary (base 2) numbers. There are approximately 4 billion possible different 32-bit binary numbers (2 raised to the 32nd power is 4,294,967,296). This accounts for the 4 gigabyte limit for addressable memory in a 32-bit computer.
The amount of physical memory on the computer is not related to the amount of memory address space. If a computer has 256 megabytes of physical memory, there is still a 4 gigabyte memory address space. If a computer has 8 gigabytes of physical memory, there is still a 4 gigabyte memory address space.
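The arithmetic behind the 4-gigabyte figure is straightforward and can be verified directly:

```python
# A 32-bit address is one of 2 ** 32 possible values, each naming one byte.
address_space_bytes = 2 ** 32

print(address_space_bytes)                 # 4294967296
print(address_space_bytes // (1024 ** 3))  # 4 (gigabytes)
```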
What happens when you run out of physical memory?
When all physical RAM in a computer is in use, Windows starts using the hard disk as if it were additional RAM. This is the purpose of the pagefile (also called the swap file). This means that the actual limit on the memory used by all applications is the amount of RAM installed plus the maximum size of the pagefile.
Generally, RAM memory is hundreds of times faster than the hard disk. Therefore, using the pagefile to relieve memory pressure incurs a significant performance penalty. One of the most effective things you can do to improve performance is ensure that there is enough RAM available to avoid frequent paging (swapping) of memory contents between disk and RAM.
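The relationship described above (total usable memory is roughly installed RAM plus the maximum pagefile size) can be sketched as a small calculation; the server sizes here are illustrative assumptions, not recommendations:

```python
def commit_limit_bytes(ram_bytes, max_pagefile_bytes):
    """Approximate ceiling on memory usable by all applications combined:
    installed RAM plus the maximum size of the pagefile."""
    return ram_bytes + max_pagefile_bytes

GB = 1024 ** 3

# Hypothetical server: 4 GB of RAM with a pagefile capped at 6 GB.
limit = commit_limit_bytes(4 * GB, 6 * GB)
print(limit // GB)  # 10
```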
How do Windows applications cooperate to share the 4 gigabytes of memory address space?
They don't. Instead, each process is isolated from the rest and has its own 4 gigabyte address space. This means that the 4 gigabyte addressability limit applies on a per-application basis, not across all applications taken together.
Each process is assigned an address space of 4 gigabytes of virtual memory, regardless of the amount of available physical memory. Applications are not allowed direct access to physical memory.
How does the 4 gigabyte address space map to a computer's physical memory?
Windows controls physical memory resources (RAM and the paging file) and carefully allocates these resources. Applications are granted access to physical memory resources only as needed, not in advance.
When an application requests more memory, Windows maps some physical memory (as long as some is available) into the process's address space. In essence, the virtual address is linked to a physical memory address. Windows maintains several tables that keep track of all of this, and the application knows only about the virtual memory address.
If both RAM and the paging file are completely full when an application needs more memory, an error will occur because of memory exhaustion.
In theory, it is possible for multiple applications to each request enough memory to fill their entire address spaces. In practice, no server would be able to satisfy all those simultaneous requests.
How much memory does Exchange need?
Exchange is a very scalable application. It can be used to serve a few dozen clients or thousands. Its memory requirements increase in proportion to the work you want Exchange to do.
With current disk and server hardware, you can keep scaling Exchange up to the limits of its 32-bit maximum address space.
Memory usage for all Windows applications can be divided into two fundamental categories: kernel memory and user (application) memory.
Kernel memory is owned by the Windows operating system, and is used to provide system services to applications. All applications need to make use of kernel resources. Therefore, kernel memory is mapped into each application's address space so that the application can see and call on system resources.
By default, a full half of the virtual address space (2 gigabytes) for each application is dedicated to the Windows kernel. The other half of the address space is user memory. This is where the application loads all of its own code and data.
It is possible to run out of kernel memory well before running out of user memory, or vice versa. There are trade-offs between kernel and user memory that have to be carefully balanced on a large scale Exchange server.
A large scale Exchange server is defined here as one that is handling so much traffic that it is in danger of exhausting either user mode memory addresses or kernel mode resources.
What happens when Exchange gets close to running out of user address space?
It becomes more and more difficult to allocate additional memory. Allocations have to be made in smaller, less efficient blocks. Shortly before the address space is completely exhausted, virtual memory fragmentation will cause new memory allocations to fail entirely. Exchange must then be re-started. But this is only a temporary solution. After a period of time, the load on the server will cause the same problem to happen again.
To permanently solve the problem you must reduce the load on the server or you must obtain additional address space. You can get additional address space by borrowing it from the kernel.
Windows 2000 Advanced Server and Datacenter editions, and all editions of Windows Server 2003 (Standard, Enterprise and Datacenter), support 4 Gigabyte Tuning (4GT) through the /3GB startup switch in the server's boot.ini file.
Instead of giving half of the address space to the kernel and half to the application, the /3GB switch allocates 1 gigabyte to the kernel and 3 gigabytes to each application. By increasing the user address space by 50%, you can continue to scale an Exchange server well beyond the limits of the default memory configuration. But there is a trade-off: you have now reduced available kernel resources.
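The trade-off can be made concrete with a small sketch of the address-space split (a simplified model of the /3GB behavior, not an inspection of a real system):

```python
GB = 1024 ** 3
TOTAL_VA = 4 * GB  # per-process 32-bit virtual address space

def split(user_gb):
    """Return (user, kernel) portions of the 4 GB address space."""
    user = user_gb * GB
    return user, TOTAL_VA - user

default_user, default_kernel = split(2)  # default: 2 GB user / 2 GB kernel
tuned_user, tuned_kernel = split(3)      # with /3GB: 3 GB user / 1 GB kernel

# User space grows by 50%, but kernel space is cut in half.
print(tuned_user / default_user)      # 1.5
print(tuned_kernel / default_kernel)  # 0.5
```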
How does the /3GB switch affect kernel resources?
Several of the most critical memory resources or pools in the kernel are pre-allocated as Windows starts. The size of these pools is set based on the address space allocated for the kernel. You cannot change the size of these pools without reconfiguring and rebooting the server.
If you set the /3GB switch, the initial size of these kernel memory pools will be reduced. At the same time, the amount of kernel resources applications demand will increase. This happens for two reasons: first, some additional kernel resources are required to support the additional user space memory; second, applications will be able to do more work and accept more connections than before.
For Exchange, setting the /3GB switch means that you will typically exhaust kernel resources before Exchange runs out of user address space.
Which kernel resources are most affected by use of the /3GB switch?
The resources listed here do not only affect Exchange. They are critical and are used to some extent by any application.
System Page Table Entries (PTEs) and Page Frame Numbers (PFNs). These map installed physical RAM to the virtual addresses that "own" the RAM. Adding physical RAM to a computer increases the demand for these resources, as does allocating the majority of a computer's memory to running applications.
Paged pool. Miscellaneous kernel resources are allocated from paged pool. This is called paged pool because this memory can be swapped to the pagefile on disk if necessary. Adding additional workload to the computer generally increases the demand for paged pool memory.
Non-paged pool. The most critical kernel resources are allocated from non-paged pool. This memory is never allowed to be swapped out to the pagefile.
It is possible to manually tune the allocation of these resources, but there are trade-offs to be made if you do. For example, if you increase available PTEs, this will proportionally reduce paged pool memory.
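The trade-off described above can be illustrated with a deliberately simplified model: the kernel's address range is a fixed budget, so enlarging one pre-allocated region must shrink another. All sizes below are hypothetical and chosen for illustration only, not real Windows defaults:

```python
def paged_pool_mb(kernel_space_mb, pte_region_mb, nonpaged_mb):
    """Toy model: whatever the PTE region and non-paged pool consume
    comes out of the fixed kernel budget, leaving the rest for paged pool."""
    return kernel_space_mb - pte_region_mb - nonpaged_mb

# With 1024 MB of kernel address space (as under /3GB):
before = paged_pool_mb(1024, 200, 128)  # hypothetical baseline
after = paged_pool_mb(1024, 300, 128)   # enlarge the PTE region by 100 MB

# Paged pool shrinks by exactly the amount the PTE region grew.
print(before - after)  # 100
```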
What happens when kernel memory resources are exhausted?
Symptoms of kernel memory exhaustion include:
Server crashes or cluster failovers
Errors that report complete exhaustion of system page table entries (PTEs) or kernel pool memory
A server may keep running, but may run so slowly that it appears to be completely unresponsive.
1) If a computer or system is working on Monday, something has gone wrong and you just don't know about it.
2) When you don't know what you're doing, document it.
3) Computer errors must be reproducible; they should fail the same way each time.
4) First call tech support, then panic.
5) Experience is directly proportional to the number of systems/computers ruined.
6) Always keep a backup of your data, and hope everyone else does the same.
7) To write a program really well, have your wife/husband/mother/father test it.
8) If you can't get the system to match the Statement of Work (SOW), redo the SOW.
9) In case you experience doubt, make it sound convincing.
10) Do not believe in miracles--rely on them.
11) The technical term is H.O.S.E.D. (Hardware Or Software Error Detected); use it often and amaze your friends.
12) When it starts working (and hopefully it will), you fixed it, in case anyone asks.
13) No troubleshooting experience is a complete failure. At least it can serve as a negative example.
14) Any expensive piece of software will break before any use can be made of it.
15) Teamwork is essential; it allows you to blame someone else.
Microsoft Exchange Server 2007 Service Pack 3 (SP3) introduces many new features for each server role. This topic discusses the new and improved features that are added when you install Exchange 2007 SP3.
Exchange Server 2007 SP3 supports all Exchange 2007 roles on the Windows Server 2008 R2 operating system.
Exchange 2007 SP3 provides support only for a new installation of Exchange on Windows Server 2008 R2. Exchange 2007 SP3 is not supported in an upgrade scenario on Windows Server 2008 R2. For example, Exchange 2007 SP3 does not support the following installation scenarios:
A new Exchange 2007 SP3 installation on a Windows Server 2008 R2-based computer that has been upgraded from Windows Server 2008
Upgrading Exchange 2007 SP2 to Exchange 2007 SP3 on a Windows Server 2008 R2-based computer that has been upgraded from Windows Server 2008
Upgrading the operating system from Windows Server 2008 to Windows Server 2008 R2 on a computer that has Exchange 2007 SP3 installed
Exchange 2007 SP3 supports the installation of the Exchange 2007 management tools on a computer that is running Windows 7. Additionally, Exchange 2007 SP3 provides support for the installation of the Exchange 2007 Management Tools together with the Exchange Server 2010 Management Tools on the same Windows 7-based computer.
Exchange 2007 SP3 provides support only for a new installation of the Exchange Management Tools on Windows 7. Exchange 2007 SP3 is not supported in an upgrade scenario on Windows 7. For example, Exchange 2007 SP3 does not support the following installation scenarios:
A new Exchange 2007 SP3 installation on a Windows 7-based computer that has been upgraded from Windows Vista
Upgrading Exchange 2007 SP2 to Exchange 2007 SP3 on a Windows 7-based computer that has been upgraded from Windows Vista
Upgrading the operating system from Windows Vista to Windows 7 on a computer that has Exchange 2007 SP3 installed
Exchange 2007 SP3 includes updates to the Exchange Search (MSSearch) component. MSSearch provides support for creating full text indexes for Exchange stores. Exchange 2007 SP3 updates the MSSearch binary files to MSSearch 3.1.
Exchange 2007 SP3 includes support for Right-to-Left text in e-mail message disclaimers in a right-to-left language, such as Arabic. In earlier versions of Exchange, when you use a transport rule to create a disclaimer in a right-to-left language on an Exchange 2007 Hub Transport server, the text appears incorrectly when you view it from Outlook 2007.
Exchange 2007 SP3 adds functionality to the transport rule setting to fully support right-to-left text in disclaimers.
Most of us have faced this problem. Has anybody searched for a solution?
Here is one:
A Windows Server 2003-based computer stops responding when you shut down the computer in a remote console session. This problem occurs randomly.
Note You may establish a remote console session by using the Remote Desktop Connection tool (Mstsc.exe) together with the /console switch.
In a regular Windows Server 2003 shutdown process, the operating system has a time-out period during which the service control manager (SCM) shuts down services. If the SCM does not finish shutting down all the services within the time-out period, the operating system continues to shut down without waiting. The time-out period is specified in the WaitToKillServiceTimeout registry entry. The default time-out period lasts 20 seconds.
However, if you shut down a computer in a remote console session, the operating system continues to shut down regardless of the time-out period. In this case, the operating system continues to shut down several seconds after the SCM sends a shutdown notification to the services. The NTFS driver then begins to shut down as part of the system shutdown process, earlier than expected, while a service may still be shutting down. If that service accesses disk resources during its shutdown, a deadlock is likely to occur between the NTFS shutdown operation and the disk resource access.
There is a hotfix available from Microsoft at the following link.
I certainly won't promise you that it solves all of the issues, but I've not seen a hang since I installed the last version of this patch. A version of the hotfix is available for both Windows Server 2003 SP1 and SP2.