Answers to Exam Questions
B. Removing or renaming the table means SQL Server cannot find it when the view is executed, so execution fails. Answer A is false because SQL Server does not keep binding information between objects; it resolves dependencies each time a view is compiled or accessed. Changing an underlying column would generate an error, but not the error given in the question, so Answer C is not correct. Users who have appropriate permissions on a view do not need additional access to the underlying table, and because the error message specifies invalid object names rather than permission violations, Answer D is not correct.
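A minimal sketch of that failure, using hypothetical table and view names:

    -- Create a table and a view that references it.
    CREATE TABLE dbo.Orders (OrderID int PRIMARY KEY, Amount money)
    GO
    CREATE VIEW dbo.vOrders AS SELECT OrderID, Amount FROM dbo.Orders
    GO
    -- Rename the underlying table; the view definition is not updated.
    EXEC sp_rename 'dbo.Orders', 'OrdersArchive'
    GO
    -- Querying the view now fails with an invalid object name error.
    SELECT * FROM dbo.vOrders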
A. The system files, which are accessed less frequently, sit on their own logical drive and have plenty of throughput. Spanning both the database and the transaction log across the two logical drives yields the maximum throughput of 200MB/sec. Placing the system files, swap file, SQL Server executable files, and transaction log on one logical disk, and the database files on the second logical disk, gives the database files only 120MB/sec of throughput, so Answer B is incorrect. Answer C is even less appealing because the transaction log and the database files share only 120MB/sec of throughput. The throughput of Answer D is the same as Answers B and C, because the database files still have only the second controller's 120MB/sec.
C. Lazy schema validation optimizes performance by ensuring that the query processor does not request metadata for any of the linked tables until data is actually needed from the remote member table. Remote servers are used only to enable stored procedure execution on a remote server; since Answers A and D both suggest remote servers, both are incorrect. To enable partitioning, you have to set up linked servers and then turn on lazy schema validation with the sp_serveroption stored procedure; this makes Answer B false.
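A sketch of the configuration, assuming a linked server named RemoteSQL01; the 'lazy schema validation' setting is the sp_serveroption option referenced above:

    -- Define the linked server that will hold the remote member table.
    EXEC sp_addlinkedserver @server = 'RemoteSQL01', @srvproduct = 'SQL Server'
    GO
    -- Defer metadata checks for the linked server until its data is needed.
    EXEC sp_serveroption 'RemoteSQL01', 'lazy schema validation', 'true'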
C and E. An IDENTITY column takes two parameters: the first is the seed value, and the second is the increment value. Answer D is incorrect for this reason. In this question, a Student table is created and a StudentID is generated as soon as a new record is inserted into the table: the first record receives a StudentID of 1, and each further addition increments the value by 10. An IDENTITY column is not updateable, hence Answer B is not correct. All of the columns besides StudentID, LastName, and FirstName are optional, so Answer A cannot be correct.
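A sketch of a table definition consistent with that behavior; columns beyond the three named ones are omitted:

    CREATE TABLE dbo.Student (
        StudentID int IDENTITY(1, 10) NOT NULL,
        LastName  varchar(50) NOT NULL,
        FirstName varchar(50) NOT NULL
    )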
C. The emphasis of this question is to use system disk resources conservatively. Autogrow allows the database to expand only when it needs more space to hold data, indexes, and other database-related information, and the autoshrink feature causes SQL Server to check the database periodically and shrink its files when data has been removed. The autoclose feature closes the database and frees its resources when no user connection is accessing it, but it does nothing during a delete operation and does not reclaim disk space, so Answer A is incorrect. Creating a large database is not in keeping with the requirement to use resources conservatively, so Answer B is not correct. Both C and D detail creating a database and allowing it to grow automatically, but without autoshrink the database would retain its large size after the periodic deletes; only Answer C addresses that scenario and maintains a conservative use of resources, making Answer D incorrect.
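A sketch of such a definition, with hypothetical database, file, and path names; FILEGROWTH supplies the autogrow behavior, and the SQL Server 2000 ALTER DATABASE options turn on autoshrink and autoclose afterward:

    CREATE DATABASE Inventory
    ON PRIMARY (NAME = Inventory_Data, FILENAME = 'D:\Data\Inventory_Data.mdf',
                SIZE = 10MB, FILEGROWTH = 10%)
    LOG ON     (NAME = Inventory_Log, FILENAME = 'E:\Logs\Inventory_Log.ldf',
                SIZE = 5MB, FILEGROWTH = 10%)
    GO
    ALTER DATABASE Inventory SET AUTO_SHRINK ON
    ALTER DATABASE Inventory SET AUTO_CLOSE ON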
C. First, the scenario calls for indexing on multiple columns, which requires a composite index. Answer D does not provide for this and can be excluded. Second, the scenario calls for the data to be kept in sorted order, which a clustered index provides by physically ordering the data. Answer B calls for a nonclustered index and can be excluded. The final requirement is that duplicate values be allowed. Answer A calls for a unique index, which disallows duplicates, and consequently does not meet the requirements of the scenario. Only the composite, non-unique, clustered index in Answer C satisfies all of the requirements.
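For instance, with hypothetical table and column names, such an index would be created as:

    CREATE CLUSTERED INDEX IX_OrderDetail_OrderDate_CustomerID
        ON dbo.OrderDetail (OrderDate, CustomerID)

Because neither UNIQUE nor NONCLUSTERED is specified, this is a non-unique clustered index covering both columns.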
B. Because the transaction log is written sequentially in a wrap-around fashion, fault tolerance is gained by mirroring the log without striping. RAID 0 provides no redundancy and is not recommended for any SQL Server files unless some other recovery mechanism is in place; since the requirement is full recovery, Answer A cannot be chosen. RAID 5 uses striping with parity to provide fault tolerance and needs fewer disks than mirroring for the same capacity; it performs well for read operations, but it is not optimal for the transaction log, which mostly performs sequential writes, so Answer C is not correct. RAID 10 combines the mirroring of RAID 1 with the striping of RAID 0 and gives the ultimate in performance and recoverability, but it requires more disks and a hardware RAID controller, which does not keep the hardware cost down, so Answer D is not correct.
D. Adding the column with a data type declaration, the NOT NULL option, a default definition, and a check constraint meets all of the requirements. Answers A and C are incorrect because a default value must be defined when a column is added with the NOT NULL option; otherwise the existing rows cannot be populated. You can create the column definition and the constraint in the same statement, so Answer B is incorrect.
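A sketch of such a statement, using hypothetical table, column, and constraint names:

    ALTER TABLE dbo.Employee
    ADD VacationDays int NOT NULL
        CONSTRAINT DF_Employee_VacationDays DEFAULT 0
        CONSTRAINT CK_Employee_VacationDays CHECK (VacationDays BETWEEN 0 AND 30)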
A. Transactional replication is used to propagate changes to subscribers as soon as they are made, and changes occur only in the publisher database. There is no such thing as "transitional" replication, so Answer B is not correct. Merge replication is used when data can be changed at either the publisher or the subscriber; it would work in the context of the question, but it introduces additional overhead on the servers, and changing data at the subscriber was not part of the requirement, so Answer C is incorrect. Snapshot replication is used for infrequently changed data, where data is allowed to be out of date until replication is scheduled, making Answer D incorrect.
A, D, and F. Data can be placed on the secondary file group, allowing you to back up only the primary file group (which holds the system tables) to ensure that the system tables are always intact; Answer A would also allow a partial restore. DBCC CHECKFILEGROUP can be run on each file group separately, which takes less time than checking the entire database and can be run less often on a secondary file group with few data changes, which makes Answer D correct. Answer F highlights that in order to have point-of-failure recovery, the primary file group must be intact; if it is kept small and separated from the rest of the data, this file group has a better chance of remaining intact. There can be a performance benefit to creating file groups on separate physical drives, but not on the same one, so Answer B is false. Answer C is also incorrect because the primary file group can never be marked as read-only. Answer E, while technically possible, does not necessarily follow from the information given, and so is not correct.
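The file-group-level operations mentioned above look like the following, with hypothetical database, file group, and path names:

    -- Back up only the primary file group, which holds the system tables.
    BACKUP DATABASE Sales
        FILEGROUP = 'PRIMARY'
        TO DISK = 'F:\Backups\Sales_Primary.bak'
    GO
    USE Sales
    GO
    -- Check only the secondary file group rather than the whole database.
    DBCC CHECKFILEGROUP ('SecondaryFG')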