Here are some tips for optimizing SharePoint performance. You can read further in the sources of the original posts:
Tip 1: Separate user and database traffic
SharePoint places a tremendous amount of demand on SQL Server -- each request for a page can result in numerous calls to the database, not to mention timer jobs, search indexing and other background operations. Routing this database traffic over a dedicated network segment, isolated from end-user requests, keeps the two from competing for bandwidth.
Tip 2: Isolate search indexing
To prevent search and user traffic from conflicting, an additional server may be added to the farm, dedicated solely to servicing crawl requests (in smaller environments, the index server may also serve this function). The farm administrator would then configure the search service to perform crawls only against this dedicated server. This configuration may reduce traffic to the Web front-end servers by as much as 70% during index operations.
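One common way to implement the dedicated crawl target is a hosts-file entry on the index server, so the web application's URL resolves to the crawl server instead of the load-balanced front ends. A minimal Python sketch, assuming a Windows index server; the IP address and hostname are hypothetical placeholders:

```python
# Sketch: make crawls of the web application resolve to a dedicated
# crawl server by appending a hosts-file entry on the index server.
# Requires an elevated prompt; IP and hostname are hypothetical.
HOSTS_PATH = r"C:\Windows\System32\drivers\etc\hosts"
CRAWL_TARGET_IP = "10.0.0.15"        # dedicated crawl WFE (assumption)
WEB_APP_HOST = "portal.contoso.com"  # web application hostname (assumption)

with open(HOSTS_PATH, "a", encoding="ascii") as hosts:
    hosts.write(f"{CRAWL_TARGET_IP}\t{WEB_APP_HOST}\n")

print(f"Crawls of http://{WEB_APP_HOST} now resolve to {CRAWL_TARGET_IP}")
```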
Tip 3: Adjust SQL parameters
One quick way to avoid future headaches is to provision the major SharePoint databases on separate physical disks (or LUNs if a storage-area network is involved). This means one set of disks for search databases, one for temporary databases and still another for content databases. Additional consideration should be given to isolating the log files (*.ldf).
Another technique is to proactively manage the size and growth of individual databases. By default, SQL Server grows database files in small increments, either 1MB at a time or as a fixed percentage of database size (usually 10%). These settings can cause SQL Server to waste cycles constantly expanding databases, and they block further writes while an expansion is in progress. An alternative approach is to pre-size databases up to the maximum recommended size (100GB) if space is available and set autogrowth to a fixed amount (e.g. 10MB or 20MB).
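As a sketch of the pre-sizing technique above, the T-SQL can be driven from Python with pyodbc; the server, database and logical file names are assumptions, so check yours with SELECT name FROM sys.database_files first:

```python
# Sketch: pre-size a content database to the recommended 100GB ceiling
# and switch autogrowth to a fixed 20MB step, per the tip above.
# Server, database and logical file names are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=SQL01;DATABASE=master;Trusted_Connection=yes",
    autocommit=True,  # ALTER DATABASE cannot run inside a transaction
)
conn.cursor().execute("""
    ALTER DATABASE [WSS_Content]
    MODIFY FILE (NAME = 'WSS_Content', SIZE = 100GB, FILEGROWTH = 20MB)
""")
conn.close()
```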
Tip 4: Defragment database indexes
SQL Server maintains its own set of indexes for data stored in various databases in order to improve query efficiency and read operations. Just as with files stored on disk, these indexes can become fragmented. It is important to plan for regular maintenance operations, including index defragmentation.
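A minimal sketch of such a maintenance pass, using the standard sys.dm_db_index_physical_stats DMV and the usual 10%/30% reorganize-vs-rebuild thresholds (server and database names are placeholders):

```python
# Sketch: find fragmented indexes in a content database and reorganize
# or rebuild them based on how fragmented each one is.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=SQL01;DATABASE=WSS_Content;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

# avg_fragmentation_in_percent drives the reorganize-vs-rebuild decision
cursor.execute("""
    SELECT s.name, o.name, i.name, ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
    JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
    JOIN sys.objects o ON o.object_id = ps.object_id
    JOIN sys.schemas s ON s.schema_id = o.schema_id
    WHERE ps.avg_fragmentation_in_percent > 10 AND i.name IS NOT NULL
""")

for schema, table, index, frag in cursor.fetchall():
    # Reorganize lightly fragmented indexes; rebuild heavily fragmented ones
    action = "REBUILD" if frag > 30 else "REORGANIZE"
    cursor.execute(f"ALTER INDEX [{index}] ON [{schema}].[{table}] {action}")
```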
Tip 5: Distribute user data across multiple content databases
A site collection with thousands of subsites stores the bulk of its user data -- every list in every site -- in a single table in SQL Server. This can lead to delays, as SQL Server must recursively execute queries over one potentially very large dataset. One way to reduce the workload is to manage the mapping of site collections to content databases, spreading the data across several databases instead of one.
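In SharePoint 2010 this mapping can be managed with the standard New-SPContentDatabase and New-SPSite cmdlets; the sketch below drives them from Python via the Management Shell, with all URLs and names as hypothetical examples:

```python
# Sketch: create a fresh content database and place a new site collection
# in it, so user data is spread across databases rather than piling up in
# one. Wraps the standard SharePoint 2010 cmdlets; all names are examples.
import subprocess

commands = [
    # A second content database attached to the same web application
    "New-SPContentDatabase -Name WSS_Content_Projects "
    "-WebApplication http://portal.contoso.com",
    # A site collection created directly inside that database
    "New-SPSite -Url http://portal.contoso.com/sites/projects "
    "-ContentDatabase WSS_Content_Projects "
    "-OwnerAlias CONTOSO\\spadmin -Template STS#0",
]

for cmd in commands:
    subprocess.run(
        ["powershell.exe", "-Command",
         f"Add-PSSnapin Microsoft.SharePoint.PowerShell; {cmd}"],
        check=True,
    )
```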
Tip 6: Minimize page size
Tip 7: Configure IIS compression
SharePoint content consists of two primary sources -- static files resident in the SharePoint root directories (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12 for 2007 and \14 for 2010) and dynamic data stored in the content databases. At runtime, SharePoint merges the page contents from both sources, then transmits them inside an HTTP response to the requesting user. Internet Information Services (IIS) versions 6 and 7 both contain mechanisms for compressing HTTP responses before transmitting them across the network. Adjusting these settings can reduce the size of the data sent to the client, resulting in shorter load times and faster page rendering.
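As an illustrative sketch, IIS 7 compression can be scripted with appcmd, run on each Web front-end; the appcmd path is the IIS default, and dynamic compression assumes the Dynamic Content Compression module is installed:

```python
# Sketch: enable static and dynamic HTTP compression on IIS 7 via appcmd.
# Dynamic compression requires the Dynamic Content Compression module;
# the appcmd path below is the IIS default.
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

subprocess.run(
    [APPCMD, "set", "config",
     "-section:system.webServer/urlCompression",
     "/doStaticCompression:True",
     "/doDynamicCompression:True"],
    check=True,
)
```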
Tip 8: Take advantage of caching
Much of the content requested by users can be cached in memory, including list items, documents, query results and Web parts. The SharePoint object cache can significantly improve the execution time of resource-intensive components, such as the Content Query Web Part. Large objects that are requested frequently, such as images and files, can also be cached on disk for each Web application (the BLOB cache) to improve page delivery times.
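Enabling the disk-based BLOB cache is a web.config change per Web application; the sketch below automates it in Python, with the web.config path and cache location as assumptions (back up the file first):

```python
# Sketch: enable the per-web-application disk-based BLOB cache by editing
# the <BlobCache> element already present in a SharePoint web.config.
# The web.config path and cache location are assumptions; back up first.
import xml.etree.ElementTree as ET

WEB_CONFIG = r"C:\inetpub\wwwroot\wss\VirtualDirectories\80\web.config"

tree = ET.parse(WEB_CONFIG)
blob_cache = tree.getroot().find(".//BlobCache")

# Point the cache at a local disk and switch it on; maxSize is in GB
blob_cache.set("location", r"C:\BlobCache\14")
blob_cache.set("enabled", "true")
blob_cache.set("maxSize", "10")

tree.write(WEB_CONFIG)
```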
Tip 9: Manage page customizations
When a page is customized with SharePoint Designer, its entire content, including the markup and inline code, is stored in the database and must be retrieved each time the page is requested. This introduces relatively little additional overhead on a page-by-page basis, but in larger environments with hundreds or even thousands of pages, all that back-and-forth to the database can add up to significant performance degradation.
Sources:
http://www.networkworld.com/news/tech/2010/052410-tech-update.html