Friday 10 December 2010

Web 2.0/Web 3.0 DITA Assignment #2

Using the Internet and evolving technologies associated with the World Wide Web to publish information in effective and accessible ways
Web 2.0 has allowed regular, non-technical people to fulfill some of the tasks normally completed by computer programmers, web designers, information scientists, or librarians.  Although Tim Berners-Lee asserts that the original intent of the Internet was to embody web 2.0 characteristics, saying "the dream behind the Web is of a common information space in which we communicate by sharing information" (Berners-Lee), there has been an obvious shift from the web 1.0 presentation technologies to the web 2.0 collaboration technologies.  For example, social networking sites like Blogger, Facebook and Twitter allow users to essentially have their own web pages on the World Wide Web in a matter of minutes, complete with text, images, and videos.  Additionally, with tagging and description/caption features, it's quick and easy to add basic metadata to organize and share your postings.  All of these sites are useful means of publishing information based upon your communication needs.  For example, if you need to get a quick message across or you want to direct people to another place, you can use Twitter to send a short tweet.  If you can say what you need to say with a few extra words, you can use Facebook.  Finally, if you've got a lot to say, you can write a blog.  All of these web sites are easily accessible to anybody with internet access, whether on a traditional computer, laptop or a mobile device.  In addition to advertising personal lives, Web 2.0 technologies are being used by activists to publish their messages to a wide audience and by techno geeks who want a better experience at a week-long festival.
For some, social networking has become the vehicle for activism.  Deborah Amos of National Public Radio reports, "Young Egyptians are using social media to fight police brutality and urge a more open government… Wael Abbas is one of the leading bloggers in Egypt's social media movement. So when he tweeted the proceedings of a recent trial involving the Web, his words were widely read… Egypt's social media movement is the oldest and largest in the Arab world, with thousands of bloggers online. The movement is a model in a region where young people are the majority. Technology is drastically changing their lives. Smart phones and Internet cafes are widely available. Some 15 million Egyptians are Web users." (Amos, 2010)  Professor Sadek noted that young Egyptians "see the future as bleak... They don't know about the job, marriage, housing — they see torture. They see corruption. They see rigged elections. What can they do? Of course: The only tool in their hands is their fingertips. And the keyboard." (Amos, 2010)  Before social networking, injustices such as these would be spread through word of mouth, street marches and rallies, and/or print journalism.  Those methods can be effective, but they cannot reach as many people in as short a time as social networking.  Because two-thirds of Egypt's population is under 30 and more technologically inclined than the older generations who rule the country, social networking seems as if it could be an effective way of bringing about change for Egypt's citizens.
Web 2.0's advanced technology is not only being used to battle social injustice, but also to make navigating an event with 40,000 people much easier.  Burning Man, a week-long tech-art festival held in Nevada, United States, is an opportunity for techno geeks to learn about new software and technologies.  Because the festival only lasts a week and is in a different location each year, there has never been the opportunity for commercial enterprises to map it.  In 2009, however, the Burning Man organization assisted "with the launch of an API. With the API you get access to descriptions and locations of the Streets, Art, Camps and Events. When combined with a map this is everything you need for a local city guide. And that is exactly what the iPhone app does…  It maps all of those entities, will geolocate you and let you mark favorites." (Forrest, 2009)  This new API allows users to plan their activities at the festival like never before.  You can find out what you want to do and exactly how to get there in the makeshift town.  This is especially innovative as the makeshift town is recreated in a new place every year.
Identifying appropriate and innovative methods of digital data representation and organization and assessing their potential for use in the information sciences
I am most familiar with my company's warehousing services and the responsibilities of the employees who fill those roles.  The contract managers are responsible for keeping track of all the productivity, financial, and training data and much more.  Although much of this data is already stored electronically, most of it is stored in separate locations and may not be readily available.  So, if CEVA created an API to call training information from our Learning Management System (LMS) web site and an API to call information from our Warehouse Management System, then a mashup could be created to clearly organize, present, and track the relationship between training and productivity in each contract.  Additionally, if the mashup also contained financial data organized by month, then it would be easy to track how an associate's training affects the contract's productivity and, in turn, how a contract's productivity affects its finances.  The Key Performance Indicators for each contract are stored and reported through an online software system; however, just as with the other systems, that data is kept separate.  Using the API technology, only that specific contract's data would be called from each system and 'mashed' together to create a great source of information for the contract manager.  A contract manager probably wouldn't call himself an information worker; however, critical business decisions are made based upon the quality of information that is available to him.  Additionally, we have multiple contracts in different locations with some of our customers.  For example, we have three Verizon facilities: Texas, Florida, and New Jersey.  Because we provide the same services in each of those locations, we can reasonably compare data from one contract to the next.  A mashup could be created that plotted each location on a map with the financial and productivity details available too.
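The contract-view mashup described above can be sketched in a few lines of Python. Everything here — field names, contract IDs, and numbers — is invented for illustration; the real shape of the data would depend on whatever CEVA's LMS and WMS APIs actually return.

```python
# Hypothetical sketch: merging per-contract data that two internal APIs
# might return. The structures are made up; a real integration would
# depend on the actual LMS/WMS API responses.

def build_contract_view(training, productivity, financials):
    """Join three per-contract data sets on the contract ID."""
    view = {}
    for contract_id, t in training.items():
        view[contract_id] = {
            "pct_trained": t["completed"] / t["required"] * 100,
            "units_per_hour": productivity.get(contract_id, {}).get("units_per_hour"),
            "monthly_cost": financials.get(contract_id, {}).get("monthly_cost"),
        }
    return view

# Sample data standing in for API responses
training = {"verizon-tx": {"completed": 45, "required": 50}}
productivity = {"verizon-tx": {"units_per_hour": 120}}
financials = {"verizon-tx": {"monthly_cost": 80000}}

print(build_contract_view(training, productivity, financials))
```

The point of the sketch is the join itself: once each system exposes its data through an API, pulling one contract's slice from each and combining them is trivial compared with chasing spreadsheets.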
Having this visibility can give the vice presidents the information they need to manage the business as a whole.
One of the dangers of using web services and cloud computing is that the ground rules are not completely established.  What happens when the content enters the cloud?  Who does it belong to?  What happens when a third-party developer has access to your API?  Savio Rodrigues spoke with Sam Ramji of Sonoa Systems.  According to Rodrigues, Sam says, "Without careful consideration, the potential load from a third-party application could disrupt the company's own, likely business critical, use of the application or service… Sam explained that the explosion of third-party mobile applications is driving interest and use of open APIs. For some companies, this is a double edge sword. Third party use of a company's APIs increase revenue potential, but also increase risk of core system downtime based on factors beyond the company's control, whether through misuse or abuse of the open API." (Rodrigues, 2010)  Sonoa Systems provides a service to protect enterprises from misuse or abuse of their APIs, so there are ways around this challenge.
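One common protective measure a provider (or a gateway vendor like Sonoa) can apply is rate limiting. Below is a minimal token-bucket sketch in Python; the class, numbers, and interface are my own illustration, not any vendor's actual product.

```python
# A minimal token-bucket rate limiter, sketched to illustrate one way an
# API provider can cap third-party load. Illustrative only.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request rejected: caller is over its quota

bucket = TokenBucket(rate=5, capacity=10)   # ~5 requests/second, bursts of 10
results = [bucket.allow() for _ in range(12)]
print(results.count(True))   # typically 10: the burst capacity is exhausted
```

A gateway sitting in front of the real API would reject or queue the excess calls, so a runaway third-party application can't take down the business-critical system behind it.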
Another innovative idea is to create mobile versions of some of our software applications.  Many of our sales associates are "road warriors", meaning they spend more than 25% of their time out of the office.  It might be beneficial for them to show customers a mobile application of our Warehouse Management System or our Transportation Management System.  For example, GM has created a way for its road warriors to be just as productive on the road as they would be in the dealership.  Willie Jow writes, "a custom app developed by General Motors is a good example of a revenue generator.  As reported in InformationWeek, GM is building an iPhone app for its salespeople that will allow them to close the sale of GM's new Cruze from anywhere, not just in the dealership.  The app links videos of the automobile to share with a customer and allows the salesperson to search inventory prices.  The app not only eliminates the paperwork associated with buying a car, it can also transform how GM makes a sale and potentially lead to key revenue gains..." (Jow, March 2010)  However, it is important to consider the dangers and conflicts associated with using smart phones for business purposes: how to ensure that personal and business information and applications remain separate, and how to keep enterprise information secure.  Jow reports that "Since mobile phones by nature are highly portable, they are relatively easy to steal or lose. And an intruder can quickly gain access to confidential information on an unprotected device or even capture information from unsecured wireless transmissions.
Even though mobile security breaches occur from a variety of causes, the primary challenge for IT departments with mobile devices in the enterprise is consistent: remote management and data protection.” (Jow, November 2010)  Jow continued to write that mobilizing the workforce is a viable business solution as long as the same stringent security measures applied in office buildings are applied to mobile technology.
Utilizing recent advances in information and communications technology to support the successful completion of a wide range of information related tasks with proficiency and efficiency in an online digital environment
Using the classification of web-related tasks in the study "A Goal-based Classification of Web Information Tasks", information-related tasks fall into three categories: information seeking, information exchange, and information maintenance.  According to Kellar, Watters, and Shepherd, "information seeking tasks consist of Fact Finding, Information Gathering, and Browsing…Information exchange tasks consist of Transactions and Communications…Maintenance tasks generally consist of visits to web pages to ensure that the content appears as it should, that links are working properly, as well as updates to user profiles."  (Kellar, Watters & Shepherd, 2007)  Until this exercise, I didn't realize how much of my job required me to use technology to complete information-related tasks.  A typical work day could include the following:
  • Wake up in the morning and check my work email and calendar on my Android phone.
  • Upon arrival at work, update my tasks in our team's collaboration team room on MS SharePoint.
  • Attend virtual team meetings using video web conferencing software.
  • Run reports on our SaaS LMS.
  • Field questions that pop up on MS Communicator.
  • Prepare physical training records for a contract in town, then use the GPS app on my phone to direct me to the warehouse to make the delivery.
  • Update my goals and my profile in our company's public performance management system.
  • Check the LMS to ensure that links to training materials are functioning.
Because we are a global company, it's imperative that we have multiple methods of communicating with our other business units.  Though I only supported the North American business unit, I collaborated with the training manager in Turkey.  She and I would collaborate through email, the LMS, and Skype.  Additionally, the online digital environment enabled me to conduct one of the toughest training sessions that I'd encountered.  I had to provide simultaneous training classes in nine different locations in the US covering three different time zones.  We used our corporate office in Jacksonville, FL and my location in Lakeland, FL as the two main spots where live training occurred.  We broadcast the training sessions via webinar to all of the trainees across the United States.  It wasn't as ideal as having a live instructor in each location, but we got the job done.  Without an online environment, this would have been impossible.



References and Resources
Amos, D., 2010. Blogging and Tweeting, Egyptians Push for Change. National Public Radio. Viewed 10 December 2010. Available at <http://www.npr.org/templates/story/story.php?storyId=129425721>
Berners-Lee, T. Frequently Asked Questions. Viewed 10 December 2010. Available at <http://www.w3.org/People/Berners-Lee/FAQ.html>
Forrest, B., 2009. Burning Man Gets an API (and a Whole Lot More). Available at <http://radar.oreilly.com/2009/08/burning-man-gets-an-api-and-a.html>
Jow, W., 2010. Protecting Your Mobile Devices and Data. Available at <http://www.itbusinessedge.com/cm/community/features/guestopinions/blog/protecting-your-mobile-devices-and-data/?cs=44450>
Jow, W., 2010. Mobile Apps Mean Business. Available at <http://www.itbusinessedge.com/cm/community/features/guestopinions/blog/mobile-apps-mean-business/?cs=40087>
Kellar, M., Watters, C. & Shepherd, M., 2006. A Goal-based Classification of Web Information Tasks. Proceedings of the American Society for Information Science and Technology, 43(1), 1-22. Retrieved 10 December 2010 from Wiley InterScience Journals.
Rodrigues, S., 2010. Using open APIs for business growth. Available at <http://saviorodrigues.wordpress.com/2010/02/26/using-open-apis-for-business-growth/>

Monday 29 November 2010

Clarity -- APIs, Web Services, Mash-ups, etc.

Richard Butterworth just helped me to understand the whole idea of web 2.0, the semantic web, and the stuff that goes along with them.

So an API system has two parts: a complicated (inside) part and a simple (outside) part.  We don't need to understand how the complicated part works so long as it works and gives us the information that we need.  The simple part provides the instructions we need to get the data out of the inside of the system.  So, the actual API is the set of instructions that tells us how to get the data we need.  For example, he showed me the art web site he has that is connected with Facebook.  He got the programming code from FB and added it to his own programming code so that he can send a message to a user on how to access a particular work of art.  The instructions in the email are the API.

A web service is a type of API.  He showed me this with the Bridgeman Art website that he uses.  The web service queries the database using URLs.  Inside the URL you put the search term that you are looking for, along with your username and password (if it's a subscription service).  The data is returned as an XML document.
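That query-by-URL pattern can be sketched in Python. The endpoint, parameters, and XML layout below are invented stand-ins for a subscription service like Bridgeman's, not its real interface.

```python
# Sketch of the web-service pattern: build a query URL, then parse the
# XML that comes back. Endpoint and XML shape are invented examples.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

params = {"search": "monet", "user": "myname", "pass": "secret"}
query_url = "https://example.com/artsearch?" + urlencode(params)
print(query_url)

# In real use you would fetch query_url; here we parse a sample response.
sample_response = """<results>
  <artwork id="101"><title>Water Lilies</title></artwork>
  <artwork id="102"><title>Haystacks</title></artwork>
</results>"""

root = ET.fromstring(sample_response)
titles = [a.findtext("title") for a in root.findall("artwork")]
print(titles)   # ['Water Lilies', 'Haystacks']
```

Once the response is parsed like this, the titles and IDs are ordinary program data, ready to be combined with anything else.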

The purpose of the APIs and web services is to get the information as XML so that you can then create mashups and mobile applications.  Once you have the data in a machine readable format (XML), what you can do is limitless. 

Today's lecture covered open data: an effort by the UK and US governments to make public data available so it can be combined into new pieces of information.  For example, you can get data from the government regarding the bus stops in the city.  You could then use that data to create a mashup with a map of handicap-accessible locations, producing a new source of information for handicapped citizens.
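The bus-stop mashup could look something like this in Python. The coordinates, place names, and the 0.5 km walking threshold are all made up for the sketch.

```python
# Toy mashup: pair open data on bus stops with a second data set of
# wheelchair-accessible places, keeping stops within walking distance.
import math

bus_stops = {"Stop A": (51.528, -0.102), "Stop B": (51.540, -0.120)}
accessible_places = {"Library": (51.529, -0.103), "Pool": (51.560, -0.150)}

def distance_km(p, q):
    # Equirectangular approximation: fine for short city distances
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371 * math.hypot(x, y)

# The new information source: accessible places reachable from each stop
guide = {
    stop: [name for name, loc in accessible_places.items()
           if distance_km(pos, loc) < 0.5]
    for stop, pos in bus_stops.items()
}
print(guide)
```

Neither data set says anything about the other on its own; the value comes from joining them, which is exactly the mashup idea from the lecture.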

The purpose of the RDF language is to be even less ambiguous than XML.  XML still requires human interpretation in order for the computer to make sense of the information.  RDF is designed so that no (or minimal) human interpretation is required.
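A rough way to see why RDF needs less interpretation: everything is reduced to subject-predicate-object triples that a program can match mechanically. This toy in-memory store (not a real RDF library, and with invented terms) shows the idea.

```python
# RDF boils information down to subject-predicate-object triples.
# Toy triple store with invented terms, for illustration only.
triples = [
    ("sherikao.com/painting42", "createdBy", "Sheri Kao"),
    ("sherikao.com/painting42", "medium", "oil on canvas"),
    ("bridgeman/artwork101", "createdBy", "Claude Monet"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern; None matches anything."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "Who created painting42?" needs no human interpretation of tag names:
print(query(subject="sherikao.com/painting42", predicate="createdBy"))
```

Because every statement has the same three-part shape, the machine can answer pattern questions without a human first explaining what each tag means.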

I think that one of my major sources of confusion is that I was trying to understand how, where, when and why I or some other non-programmer would use this information.  The answer is that they wouldn't.  This is mainly for computer programmers.  Mashups are supposed to be designed so that non-technical people can use them, but Richard says that some programming skills are necessary to make them fully functional.

Tuesday 16 November 2010

Supplementary Resources -- Trying to Improve

I got my grade back today from my first assignment and it was just okay.  The feedback said that I should use other resources to foster and help demonstrate my understanding.  So, I just finished reading two articles on XML.  I read "XML and the Second-Generation Web" and then I read "What is RDF".  I want to get a better understanding of Web 2.0 technologies.  So, from what I understand, XML is a much more standard and flexible language for telling the World Wide Web what we want it to do.  With html, the web only recognized the tags, like paragraph and heading, but not the data that was in between the tags.  Therefore, html is only used as a presentation language; it was not intended to manipulate the data between the tags.  Additionally, the tags are very general, which makes it difficult for pages to link together based on a page's content.  XML provides tags that are more content specific, giving pages and other computers a greater opportunity to show commonalities and to understand/recognize the language.  XML can be read by both computers and humans because it consists of regular text.
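The HTML-versus-XML difference can be shown concretely. In the HTML version the facts are locked inside prose, while the XML version's content-specific tags (invented here for the example) let a program pull the artist and price out directly.

```python
# HTML markup only says "this is a paragraph"; XML's content-specific
# tags let a program extract the data itself. Tag names are invented.
import xml.etree.ElementTree as ET

html_like = "<p>Water Lilies by Claude Monet, $500</p>"   # meaning locked in prose

xml_doc = """<artwork>
  <title>Water Lilies</title>
  <artist>Claude Monet</artist>
  <price currency="USD">500</price>
</artwork>"""

art = ET.fromstring(xml_doc)
print(art.findtext("artist"))          # Claude Monet
print(art.find("price").get("currency"), art.findtext("price"))   # USD 500
```

To get the artist out of the HTML version, a program would have to guess at the sentence structure; with XML it just asks for the `artist` element.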

Additionally, XML would speed up the browsing experience by allowing web pages to rely on Java programs instead of the server to sort and process information.  XML also provides a standard for metadata.  RDF focuses on the actual data/knowledge itself and provides a way for the World Wide Web to recognize pages that have similar content.  The RDF metadata scheme uses a process somewhat like the "join" clause in SQL database queries to alert the web that web pages have commonalities.

Saturday 30 October 2010

Web 1.0 Coursework Assignment

Evaluating and employing appropriate technologies for the digital representation of information
In order to determine the best technologies to employ, one must understand the information itself and the needs and behaviors of the information users.  As a teacher I had several different types of information to communicate to my students and parents.  I knew that students forgot pertinent information by the end of the school day and that it rarely reached parents.  I needed a way to make that information readily available in a quick, efficient and cost-effective manner.  I decided to use the World Wide Web and email technologies, so I found a free teacher web site like www.teacherweb.com that had a calendar feature for important dates, email, the ability to upload documents, and a grade book.  The advantages were that it was already designed so I didn't need html skills, it was available 24/7 to both students and parents, and it took the burden from me as the sole source of class information.  The disadvantage was that it required consistent and constant content management.  Between teaching and coaching, I didn't always have the time.  As I briefly mentioned in my 4 October blog post at http://tieska-lifeinlondon.blogspot.com/, to design a web site for my artist mother, I took an online html course and learned enough to build www.sherikao.com.  However, I didn't know anything about information architecture, so the site isn't as effective or user friendly as it could be.  So, while it's a great opportunity for more exposure, inadequate metadata, slow graphic uploads, and poor navigation make web traffic very unlikely.
When I worked as a trainer, we created an intimidating spreadsheet called a Master Training Plan (MTP).  It was intended just to provide information: what training needed to occur and when.  Training was supposed to be tracked using the online training records database in the Learning Management System (LMS).  Because the LMS was not user friendly, our employees used the spreadsheet as the tracking tool instead of the database.  A large part of the problem with the database was that the reporting feature was incomplete and inadequate for the general user.  From what I've learned in DITA, I don't believe the database was designed properly.  It was probably designed "think[ing] of the tables as a spreadsheet." (MacFarlane, Butterworth, Krause, 2010)  Additionally, nobody had the SQL skills to query the database to create the proper reports for general users.  From what I learned, this LMS was chosen because it was the cheapest, not because it had the best usability.  So, the information was stored in the database but was difficult to retrieve.  Morville and Rosenfeld say it well: "users need to be able to find content before they can use it." (Morville and Rosenfeld 239)  So, in this respect, the LMS is inefficient.
Morville and Rosenfeld point out that "web sites and intranets are not lifeless, static constructs" and they discuss "the concept of an 'information ecology' composed of users, content, and context." (Morville and Rosenfeld 24)  When designing our company web site and intranet, the web designers must have considered this concept because there is a great difference between our internet site, www.cevalogistics.com, which is designed for customers, and our intranet site, which is designed for employees.  The problem is that retrieving information from our intranet site is not easy.  In my 19 October blog post I said, "It's much easier to do SQL searches if you know how to give the correct command… Information retrieval is much more ambiguous because you don't always know what you need and you don't always know exactly what you're looking for… Additionally, the metadata, if any, that was attached to the information that you're looking for affects your search because if you're not using the right search terms, you may never find it."  So, while the intranet is a great place to store documents and other important employee information, it would have been much more effective if the site had been designed around the 'information ecology' concept.  Just as with the LMS database, the information is stored in the intranet, but we can't find it.  So, the information and the systems are useless.

Managing data with appropriate information technologies in an efficient and professional manner that draws on a critical knowledge of the nature and constraints of digital information
Technology in general and web technology in particular have improved greatly since I created www.sherikao.com in 2006.  There are many web sites, already designed, that we could use to improve my mom's web presence and possibly become profitable.  For example, many people use social networking sites such as www.facebook.com as a business tool.  It's easy to upload graphics and videos without needing any web design or html experience.  Because there is no shopping cart or other shopping feature, Facebook would be used as a marketing and awareness tool.  Web sites designed especially for crafters and artists, like www.etsy.com, on the other hand, make creating a web presence for your art effective, professional, and potentially profitable by providing sales capabilities.  Digital cameras have made photographing and sharing your work extremely easy.  However, a major concern about uploading images to the internet is the possibility of theft.  Although general consensus and common sense say that to prevent your images from being stolen you shouldn't post them, this is not always practical.  On his web site Greg Cope says, "Of course, image theft can be defined in a number of ways, and its definition - and hence measures (if any) taken to prevent it - will depend upon the individual. There are many ways to protect images from being downloaded, ranging from modifying the image itself (tips 1-3), to preventing webpages downloads (tips 4-8), to being pro-active in finding unauthorized usage of images online (tips 9-10)." (Cope, 2007-2010)
Until CEVA decides to invest in a new LMS, we have to use the one that we have.  In order to make the LMS database function more effectively, we have to exploit the features that work well.  If courses are created directly in the system and users register for the courses through the LMS, the completion data (i.e. course date, grade, instructor, etc.) is automatically recorded in the system.  There are a few reports that can be produced from this data.  One drawback is that this will require a culture shift in order to be effective, because the company is in the habit of creating a paper roster, sending an electronic copy to the Training Department, and leaving the details to us.
Additionally, CEVA employed the use of MS SharePoint as a collaboration tool for departments and teams within the organization.  However, no information architecture structure was assigned to the system to make information storage and retrieval more effective.  In his blog, Ari Bakker says, “Findability is one of the most important factors in the success of a SharePoint site. If users cannot find what they are looking they will quickly use alternate methods to get results. Employees that cannot find information are less productive and less likely to use the system in general. Likewise users that cannot find information on an internet site will look elsewhere for products and services losing the company revenue.” (Bakker, 2010)  I would suggest suspending the SharePoint site temporarily so that no new content could be added while specific structures are put into place for proper information storage.  Users would be restricted from uploading new content without adding specific metadata to their document.  To prevent users from inserting nonsense where metadata should be, I suggest providing a keyword list specific to each department.  Each department should be responsible for developing the keyword list for documents created in their area.  Users can then draw on these keywords to properly label their documents.  This same concept could be applied to the corporate shared drive and intranet.
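The keyword-list idea could be enforced with a small validation step before an upload is accepted. This Python sketch uses invented department names and keywords; a real SharePoint deployment would do this through its own metadata and content-type features.

```python
# Sketch of a controlled-keyword check: refuse an upload whose metadata
# isn't drawn from the department's approved list. All names invented.
DEPARTMENT_KEYWORDS = {
    "Training": {"lms", "mtp", "onboarding", "safety"},
    "Finance": {"budget", "forecast", "invoice"},
}

def validate_upload(department, keywords):
    """Return (ok, rejected) for a proposed set of metadata keywords."""
    allowed = DEPARTMENT_KEYWORDS.get(department, set())
    rejected = [k for k in keywords if k.lower() not in allowed]
    return (len(rejected) == 0, rejected)

print(validate_upload("Training", ["LMS", "safety"]))    # (True, [])
print(validate_upload("Training", ["asdfgh"]))           # (False, ['asdfgh'])
```

Rejecting free-text nonsense at upload time is what keeps the controlled vocabulary controlled, and therefore keeps search results findable later.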

References

Morville, P. and Rosenfeld, L., 2007. Information Architecture for the World Wide Web. 3rd ed. O'Reilly Media, Inc.

Cope, G., 2010. Tips and Tricks to Protect Images on the Internet. [online] Available at <http://www.naturefocused.com/articles/image-protection.html> [Accessed 30 October 2010]

McDowell, T., 2010. DITA Session 2 Review. Tieska McDowell's Personal Blog [blog] 4 October 2010. Available at <http://tieska-lifeinlondon.blogspot.com/>

McDowell, T., 2010. DITA Session 4 Catch-Up Information Retrieval. Tieska McDowell's Personal Blog [blog] 19 October 2010. Available at <http://tieska-lifeinlondon.blogspot.com/>

Bakker, A., 2010. 10 ways SharePoint 2010 improves findability. SharePoint Config [blog] 14 April 2010. Available at <http://www.sharepointconfig.com/2010/04/10-ways-sharepoint-2010-improves-findability/>

MacFarlane, A., Butterworth, R. and Krause, A., 2010. Lecture 03: Structuring and querying information stored in databases, INM348 Digital Information Technologies and Architectures. [online via internal VLE] City University London. Available at <http://moodle.city.ac.uk/mod/resource/view.php?id=12294>

www.sherikao.com

http://www.teacherweb.com/

http://www.etsy.com/

http://www.facebook.com/

www.cevalogistics.com

Thursday 21 October 2010

DITA Session 4 Catch-up - Information Retrieval

I'm finally getting around to doing my information retrieval blog.

We had to conduct a number of searches using two different search engines: Google and Bing.  In addition to using those two search engines, we had to do two different types of searches for each query.  In one search we used natural language queries (i.e. how to, where is, etc.) and in the other we had to use Boolean operators (i.e. NOT, AND, OR).  Then, we had to record our findings in an Excel spreadsheet using the first 5 results and calculate the precision (number of relevant documents retrieved divided by number of documents retrieved).  We also had to label the queries by need type: transactional, navigational, or informational.
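The precision calculation from the exercise is simple to write down in Python; the relevance judgments below are made-up sample data, not my actual spreadsheet.

```python
# Precision as we calculated it: relevant results retrieved divided by
# results retrieved. Sample judgments are invented.

def precision(relevance_judgments):
    """relevance_judgments: list of True/False for each retrieved result."""
    return sum(relevance_judgments) / len(relevance_judgments)

# First 5 results for one query, each judged relevant or not:
top5 = [True, True, False, True, False]
print(precision(top5))   # 0.6
```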

I found that the search engines are better, or more precise, at handling transactional queries.  I guess it's because what the user needs to do is more straightforward.  What I learned doing the first search in the activity is that these searches are really ambiguous, because if you have limited knowledge of what you're looking for, it's harder to know if you've found it.  I thought that I'd found the correct website, but it turns out that I was way off base.  When I asked Andrew how I was supposed to know when I'd found the right website, his response was that you'll know when you've found it.  Then I asked him what happens when you think you've got the right answer but you're wrong.  He said that that is one of the things that makes information retrieval difficult.

I didn't find a big difference between using natural language queries and Boolean operators.

It's much easier to do SQL searches if you know how to give the correct command.  It is much more straightforward because you know exactly what you want the system to produce.  Information retrieval is much more ambiguous because you don't always know what you need and you don't always know exactly what you're looking for.  Also, if you have limited knowledge of the subject of your search, you don't always know whether or not you've found it.  Additionally, the metadata, if any, that was attached to the information that you're looking for affects your search because if you're not using the right search terms, you may never find it.

Tuesday 19 October 2010

Late post for DITA Session 3

In Session 3 we learned about relational databases.  We learned a bit of SQL and how to query databases.  At first glance this was a bit difficult.  The idea of tables and linking them together made sense because I have worked with MS Access before.  I built a database for OACF's membership records.  However, after this brief introduction to SQL and databases, I'm sure that it could have been built in a much more effective way.  What I learned from the lecture materials is that you can't take the spreadsheet mentality into the database world.  Spreadsheet mentality makes you want to add more and more columns to the end of the spreadsheet whenever you need to account for more data.  In a database, however, each new entity (thing) gets its own table, with columns representing its attributes.  That way it's easier to make changes to one table and link it to another table, as opposed to changing all the records in one gigantic table.
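The table-per-entity idea can be demonstrated with Python's built-in sqlite3 module. The membership schema below is invented (loosely inspired by the OACF database I mentioned), and the join pulls the entities back together only when we need them together.

```python
# Table-per-entity, sketched with sqlite3: members and their payments
# live in separate tables linked by a key, instead of extra columns
# bolted onto one spreadsheet. Schema and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE payment (member_id INTEGER REFERENCES member(id),
                          amount REAL);
    INSERT INTO member VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO payment VALUES (1, 25.0), (1, 10.0), (2, 40.0);
""")

# A join recombines the entities only when we ask for them together
rows = db.execute("""
    SELECT member.name, SUM(payment.amount)
    FROM member JOIN payment ON payment.member_id = member.id
    GROUP BY member.id
""").fetchall()
print(rows)   # [('Ada', 35.0), ('Grace', 40.0)]
```

Adding a third payment for Ada means inserting one row in `payment`; in the spreadsheet version it would mean adding yet another column to every record.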

Once I understood that in order to query a database you need to understand the tables and their attributes, it was fairly easy to get through.  I did have some questions along the way; however, I finished the assignment in class.  The toughest exercises were the ones where tables had to be joined, and I was unable to figure out how to join the three tables on my own.  I just went back through, read the lecture notes, and tried the exercises again.  It went much faster this time, although I didn't complete tasks 9 and 10 as my attention span started to wane.  In fact, after I finish this blog post, I'm going to take a break.  Anyhow, I've found that it's much more beneficial to my learning and application in the lab if I've read the lecture notes before class!  I will be doing that from now on!

Monday 4 October 2010

DITA Session 2 Review

Today's lecture discussed "The Internet and the World Wide Web". 

A network links computers together.
Types of networks are:
  • LAN - Local Area Network which works within a building
  • WAN - Wide Area Network which works between different buildings
  • Internet - a vast network of networks
    • examples - telephones, cable, satellite
    • allows computers to connect across the globe and it is a building block for the WWW
    • It's based on a design by the US Military in the 1960s during the Cold War Era
    • Allows access to files remotely

Protocols
  • telnet - allows access to another PC
  • ftp - allows uploading of files
Domain Name System - DNS Space
  • generic - .com, .org, .gov
  • national - .ac, .uk
Disruptive Technology
The Internet disrupts the way that people work.
  • publishing -- today's society believes that they can find out anything they need to know on the Internet.
  • music -- many people don't see the purpose in going into a brick and mortar store and buying a tangible cd when you can download the album electronically.
  • software development -- there are many open share software sites
Client/Server Model
  • server - detects messages, sends resources
  • client - sends requests, interprets responses
URL - Uniform Resource Locator
http://www.city.ac.uk/cs/conditions/conditionsofuse.html

http:// (protocol)
http://www.city.ac.uk/ (dns name of server)
/cs/conditions/conditionsofuse.html (local path in relation to server folder)
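Python's standard library can do this same breakdown, which is a handy way to check the parts:

```python
# Splitting the lecture's example URL into the three parts listed above
from urllib.parse import urlparse

url = "http://www.city.ac.uk/cs/conditions/conditionsofuse.html"
parts = urlparse(url)
print(parts.scheme)   # http           (protocol)
print(parts.netloc)   # www.city.ac.uk (DNS name of server)
print(parts.path)     # /cs/conditions/conditionsofuse.html (local path)
```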

  • browser acts as client
  • sends a request to computer with specified address
  • asks for a particular document
  • server constantly running http "daemon"
    • a program that waits for clients to connect to it
    • processes requests and sends digital document to browser
  • client interprets and displays it
  • server may respond to thousands of requests per second
So today I created several web pages and added links and images.  It was pretty simple for me because I'd learned html a few years ago.  In 2006 I created a web page for the art business that my mom and I have.  I took an online beginning html course and learned the basics.  The web address is http://www.sherikao.com/.  It's not very good, but I was proud at the time.

We published our pages to the school's server so that we could access them on the web.