Now that we have a policy, it’s time to review our existing websites within the scope of our objectives. This will allow us to identify issues to be fixed in existing websites and avoided in future releases, creating a baseline for future work and identifying where training or additional expertise is needed. The reviews are likely to highlight instances of good practice, identifying staff or suppliers with critical skills to build on and celebrate. Results are also helpful in stakeholder reporting activities: we can highlight existing problems and their impact so that, later on, the impact of positive change can be put into context.

University IT departments are likely to support hundreds of services of differing complexity and technology stacks, and probably a mix of commercial off-the-shelf and internally developed services as well. So, first we should prioritise and group the services to be reviewed, looking for the greatest opportunities for learning and benefiting users. This will vary between institutions, but you might be looking at usage, priority, whether VPATs are available, that kind of thing.

Then we should consider how we will perform the testing. I know many universities have employed agencies to perform accessibility testing on their corporate sites, but it’s unlikely that this approach will scale to the large number of other services, and, unless approached with care, using an agency is less likely to result in sustained benefits through building knowledge and awareness of accessibility internally. However, we also have to consider the risk of false positives or incorrect fixes applied through limited knowledge of assistive tools such as screen readers, as Abi James, Digital Accessibility Consultant at Barclays, points out. We will come back to implementing accessibility throughout the development lifecycle later in the presentation.

Instead, we’re going to share an approach we have started using at the University of Southampton. Within the Digital Learning team, we’re building an accessibility testing process. This is primarily to check the accessibility of unsupported tools that have been used by colleagues, and to review the accessibility of supported or new sites and services. It’s important that this is scalable so that we can build the team’s capacity. So, our process grows along with our familiarity and confidence with accessibility testing.

It’s broken down into four levels. The first level was inspired by the results of a Freedom of Information request to the Central Data and Digital Office (CDDO). Through this we discovered that the process they follow when reviewing websites for compliance is to use automated testing tools, manually correlate and check the issues found, and then do a keyboard navigation and browser zoom check. Only if serious issues are found at this stage is a more detailed audit likely. So our internal accessibility testing service is based around these four levels. The first is the baseline test, mimicking what the CDDO do. We are building competency within our team to increase the level of testing as we learn more and become more proficient. The second level of testing is based around the “W3C Easy Checks”, which are designed to be quick and easy, rather than definitive. Alistair McNaught and Abi James did some work to document similar tests that can be completed with limited knowledge and tools. Links to these are shared on our presentation support site.
As our understanding grows, we are building up our level 3 testing process, a link to which is on our presentation support site. The aim is to reach the level of competence where we can follow the Website Accessibility Conformance Evaluation Methodology (WCAG-EM) to perform a full audit.

Based on our prioritisation of services, and building our level of competence in testing, we are in a position to start creating Accessibility Statements. Even with a so-called level 1 test we can start this. While accessibility statements should contain a minimum set of information, there is the opportunity to add more, and to make it relevant and understandable to users. Promoting and communicating our statements can help to reinforce awareness, demonstrate that we care about removing barriers to the use of our services, and encourage reporting of accessibility issues, helping us identify new ways to improve our services and remove accessibility barriers.

One final thought about accessibility testing. While automated testing is not sufficient on its own, command-line tools such as Axe or Pa11y can be used at scale to identify issues across sites. This may help to identify issues that can be remediated just by adjusting CSS in the case of colour contrast, or by adjusting the viewport meta tag where it has been set to disable text scaling and zooming. Identifying systematic fixes can help create momentum and provide a positive way to communicate progress to secure greater commitment. These lessons can also be applied when implementing new services.
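To make the idea of at-scale checking concrete, here is a minimal sketch, our own illustration rather than part of any documented process, of how Pa11y’s Node API could be scripted over a list of pages, grouping the reported issues by rule code so that repeated problems such as colour contrast stand out as candidates for a single systematic fix. The URLs are placeholders, and it assumes Node.js with the pa11y package installed.

```typescript
// Sketch: run Pa11y across a list of URLs and count issues by rule code,
// so that systematic problems (e.g. colour contrast) stand out.
// Assumes Node.js with the pa11y package installed; URLs are placeholders.
import pa11y from 'pa11y';

const urls = [
  'https://www.example.ac.uk/',
  'https://www.example.ac.uk/library',
  'https://www.example.ac.uk/timetables',
];

async function main(): Promise<void> {
  const counts = new Map<string, number>();

  for (const url of urls) {
    const results = await pa11y(url); // Pa11y defaults to the WCAG 2 AA standard
    for (const issue of results.issues) {
      counts.set(issue.code, (counts.get(issue.code) ?? 0) + 1);
    }
  }

  // Report the most frequent issue codes first, to highlight candidate systematic fixes.
  const sorted = [...counts.entries()].sort((a, b) => b[1] - a[1]);
  for (const [code, count] of sorted) {
    console.log(`${count}\t${code}`);
  }
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```

A report like this is only a starting point, but sorting by frequency makes it easier to spot the handful of template or stylesheet changes that would remove large numbers of issues at once.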