Tuesday, July 28, 2009

What is open access?

Bailey, Charles W., Jr. (2006). What is open access? [Electronic version]. In Jacobs, Neil (Ed.), Open Access: Key Strategic, Technical and Economic Aspects. Oxford: Chandos Publishing.



http://www.digital-scholarship.org/cwb/WhatIsOA.pdf

Abstract :

Open access has been defined several times and in various ways. The concept has evolved as the technology became more familiar to scholars and to movements aimed at developing knowledge through free, unrestricted access to scholarly literature, without financial, legal, or technical barriers. The article traces the evolution of the definition of open access through the historical accounts of the movements supporting the notion. Self-archiving and open access journals are recommended as strategies for achieving open access to scholarly literature, with self-archiving covering the author's personal website, disciplinary archives, institutional-unit archives, and institutional repositories. The paper also emphasizes the characteristics of open access journals and identifies the major types of open access journal publishers: born-OA publishers, conventional publishers, and non-traditional publishers. The open access movement believes that it can answer the critical question of access to scholarly information; it is not interested in reforming the existing scholarly information system, but in transforming it so that it can function effectively in a rapidly changing technological environment.


Three (3) things I learned from reading the article :


1. The definition of open access evolved through the Budapest Open Access Initiative, the Bethesda Statement, and the Berlin Declaration. Collectively, the BBB definition calls for the removal of both price and permission barriers to access to scholarly literature.

2. There are two (2) complementary strategies for achieving open access to scholarly journal literature, known as the green and gold roads to open access: self-archiving and open access journals (see the harvesting sketch after this list).

3. The difference among the terms preprint, postprint, and e-print in digital publishing. Preprints are draft versions of articles that have not undergone peer review or editorial review and modification. Postprints are the final published versions of articles. Digital preprints and postprints are collectively called e-prints.
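Self-archived e-prints are typically made discoverable through OAI-PMH, the metadata harvesting protocol that most disciplinary archives and institutional repositories expose. As a minimal sketch, the Python snippet below lists the titles from one page of Dublin Core records; the repository URL is a placeholder I invented, not one named in the chapter.

```python
# Minimal OAI-PMH harvesting sketch. The repository URL is a placeholder,
# not a service named in the article; any OAI-PMH endpoint answers the
# same "ListRecords" request.
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.edu/oai"  # hypothetical endpoint
DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"  # Dublin Core title element

def list_titles(base_url: str) -> list[str]:
    """Fetch one page of oai_dc records and return their titles."""
    url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    return [el.text for el in tree.getroot().iter(DC_TITLE)]

if __name__ == "__main__":
    for title in list_titles(BASE_URL):
        print(title)
```

Running this against a real repository endpoint prints one title per harvested record; paging through a full archive would use the resumptionToken that the protocol returns with each response.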

Reflections :

The movements supporting the notion of an open access system are founded on the argument that access to scholarly information is vital to the development of knowledge in society. There is an ongoing dilemma over access to specialized literature, which others associate with the so-called digital divide. Open access may be defined in various ways, but there is a collective notion of using it as a tool to address the access difficulties of scholars and learning institutions. It is an attempt to make this literature available to the public without legal restrictions, in order to facilitate knowledge acquisition. At present, universities and other institutions have acknowledged the importance of open access by putting links on their library websites (UP Main Library, Philippine eLib). It has been tagged as the future of digital publishing and of the evolution of information. With limited access to online databases due to license restrictions, I found myself benefiting from open access. I found it useful for my RAs and later for my research activities. It may not be as comprehensive as the subscribed online databases, but it could be a good alternative, or even a replacement for them. Some may consider it a threat to the existing communication system, a revolution rather than an evolution of information. With people craving free access to information, open access could indeed be a threat, and an agent of an information revolution.

Friday, July 24, 2009

Developing countries and copyright in the information age

Pistorius, T. (2006). Developing countries and copyright in the information age: the functional equivalent implementation of the WCT. Potchefstroom Electronic Law Journal, 9(2), 1-27.

http://www.puk.ac.za/opencms/export/PUK/html/fakulteite/regte/per/issues/2006_2__Pistorius_art.pdf

Abstract :

The internet has been tagged as the "world's biggest copy machine". In this light, the internet and digital technology provide opportunities and pose threats to both public and private interests in intellectual property rights. This paper introduces the functions and implementation of existing copyright laws worldwide as they apply to developing countries. The WIPO Copyright Treaty (WCT) provides protection against the circumvention of technological protection measures applied to works protected by copyright. The paper discusses the legislative responses to digital media and the technological measures involved in implementing the WCT. It offers views on the implementation of the WCT, the impact of technological protection measures, and the functional equivalent of the WCT in developing countries. It also covers trends in copyright law and makes critical reviews of the laws governing the digital world. The paper concludes with the recommendation that the rights of owners and users should be functionally equivalent irrespective of the medium of embodiment, and that the balance between private and public rights may be restored if a functional equivalent approach is adopted.
Three (3) things I learned from reading the article :
1. Current trends in copyright law have upset the balance between copyright owners' rights and the public interest. Issues arise between owners' rights to impose technologies that prevent unauthorized use of their works and the legitimate rights of users to access such works.
2. Technological protection measures give authors complete control over the market for their works and can become abusive. Pay-per-use mechanisms limit access to information, which runs counter to the argument for copyright protection: that it should encourage the publication of works and thereby enhance society's level of knowledge.
3. The implementation of the WCT and its anti-circumvention provisions in developed countries has disturbed the copyright balance and upset the equilibrium between private and public rights. The trend is harmful to developing countries, which are net importers of information products.
Reflections : 
The article provides an overview of the copyright laws currently existing worldwide: laws drafted not just to protect the rights of copyright owners but also the right of the public to access knowledge. However, with the emerging technological advances in the information world, these laws have become agents of protectionism and commercialism. Developed countries have made legal provisions that benefit only the owners' rights, with the public being deprived of its right to knowledge acquisition. This is also viewed as an implication of the great divide: it widens the gap between the rich and the poor, the developed and the developing nations, with the developed countries as the exporters of information and the developing countries as net importers. The WCT was conceived to balance these rights, but in the end it became a tool for abusive authorities. Some implementations do not provide exceptions for academic and research use of copyrighted materials. Rights should be balanced and not dependent on the media format; they should work well for both the author and the public interest.
As an information professional, I believe that knowledge plays an important role in the development of the individual and of society. Depriving the public of access to the right information is, by all means, hindering their growth as individuals. Copyright laws play a crucial role in the information society, and their abusive side favors the commercial use of information, making money out of it. Although it is not as evident, libraries are experiencing this abusive side: they are being pushed to avail of costly information on a pay-per-use basis through subscriptions to online databases. Owners have full control over the materials and can demand payment when contracts end. It is logical, but it is often harsh on academic institutions. Earning and learning should be balanced, and the only way to attain that equilibrium is to make copyright laws objective enough to balance private and public rights.





Tuesday, July 14, 2009

Library 2.0 Theory : Web 2.0 and Its Implications for Libraries

Maness, Jack (2006). Library 2.0 theory: Web 2.0 and its implications for libraries [Electronic version]. Webology, 3(2), June 2006.
Abstract :
The article introduces the evolving concept of Web 2.0 technologies and the dramatic changes they have brought to traditional libraries. It defines Library 2.0 as the application of interactive, collaborative, and multimedia web-based technologies to web-based library services and collections: a technology that is user-centered and rich in content, interactivity, and social activity. The implications of synchronous messaging, streaming media, blogs and wikis, social networks, tagging, RSS feeds, and mashups for library access and services are widely explored in the article. Such implications play major roles in the practice of librarianship today and in the evolution of libraries in the future.
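As one concrete Library 2.0 building block, the sketch below reads a library's "new acquisitions" RSS feed using only the Python standard library; the feed URL is a placeholder I made up, not a service named in the article.

```python
# Sketch: reading a library's "new acquisitions" RSS feed, a Library 2.0 staple.
# The feed URL is hypothetical; any RSS 2.0 feed has the same <item> layout.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://library.example.edu/new-acquisitions.rss"  # placeholder

def latest_items(feed_url: str, limit: int = 5) -> list[tuple[str, str]]:
    """Return (title, link) pairs for the newest items in an RSS 2.0 feed."""
    with urllib.request.urlopen(feed_url) as response:
        root = ET.parse(response).getroot()
    items = root.findall("./channel/item")[:limit]
    return [(item.findtext("title", ""), item.findtext("link", "")) for item in items]

if __name__ == "__main__":
    for title, link in latest_items(FEED_URL):
        print(f"{title} -> {link}")
```

A user subscribing to such a feed gets new titles pushed to them instead of having to search the OPAC, which is exactly the shift from searching to finding that the article describes.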
Three (3) things I learned from the article :
1. Library 2.0 is not about searching, but finding; not about access, but sharing. It reflects the notion that people search for and use information as communities, not as individuals.
2. Library 2.0 is viewed as a revolutionary tool for libraries and information professionals, with shifts toward opening not just catalogs and collections but also access to their control, toward less-secured inventory systems and more collaborative discovery systems. Librarians will also enable users to create systems and services for themselves.
3. The Web will continue to evolve, and so will library technologies. Librarians and libraries must continually adapt to such changes to keep the profession, and the repository of information, up to date.
Reflections :
From email to chat, from text-based pages to streaming media and interactive databases, from webmasters to blogs and wikis, from the OPAC to personalized social network interfaces: these are but a few of the dramatic changes brought to libraries and librarians by Web 2.0 technologies, or Library 2.0 technologies in particular. Although I am not that familiar with some of the terminology of these recent innovations, I am quite certain that I have been using some of them, both as an information professional and as an end-user. It is therefore a must for us librarians to keep up with the latest technologies available to the profession, to maximize our potential and be part of the evolution as agents of change. Library 2.0 offers a democratic approach to the access and use of information across society; it is an attempt to make information easily available to everybody. On the other hand, there is an underlying question of reliability and authority. The lack of peer review and editorship is a challenge to librarians, though it falls to us to understand such sources of information and be more critical of them. Library 2.0 is the technology available at the moment, and it is continually changing. We can only hope that such changes will benefit all sectors of society and not bring chaos, especially to us information professionals. Embracing the latest innovations will revolutionize the profession and could open job options for librarians; ignoring them could mean being stuck in the traditional practices of librarianship, and a less secure future.

Wednesday, July 8, 2009

Computer Security : A Survey of Methods and Systems

Yampolskiy, Roman V., and Govindaraju, Venu (2007). Computer security: a survey of methods and systems [Electronic version]. Journal of Computer Science, 3(7), 478-486.
Abstract :
With the continuous evolution of the computer world, computer security is becoming more complicated, and new attacks on it are introduced in various forms. This article surveys the different methods and taxonomies in all aspects of computer security, such as viruses, attacks, and software bugs, with special emphasis on Intrusion Detection Systems (IDS) and their evaluation.
The paper introduces various research studies on computer security; Intrusion Detection Systems technologies and products; intrusions, attacks, and attackers; flaws and viruses; and the evolution of security tools. Its main purpose is to provide advice and adequate information to security enthusiasts while serving as a reference guide for security professionals.
Three (3) things I learned from reading the article :
1. Intrusion Detection Systems (IDS) can be knowledge-based or behavior-based: the first is characterized by matching the signatures of well-known attacks, while the latter is based on deviations from users' normal actions (a small sketch of the signature-matching idea follows this list).
2. No IDS is capable of accurately identifying every event occurring on any particular system.
3. The increasing complexity and rapid evolution of modern computer systems prevent the attainment of absolute security.
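To make the knowledge-based approach concrete, here is a minimal sketch of signature matching over a request log; the signatures and sample log lines are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a knowledge-based (signature-matching) IDS.
# Signatures and log lines below are invented for illustration.
import re

# Each signature pairs an attack name with a regex describing its footprint.
SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|union)\s", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def scan(log_lines):
    """Yield (line_number, attack_name) for every signature match."""
    for number, line in enumerate(log_lines, start=1):
        for attack, pattern in SIGNATURES.items():
            if pattern.search(line):
                yield number, attack

if __name__ == "__main__":
    sample_log = [
        "GET /index.html HTTP/1.1",
        "GET /search?q=' OR 1=1-- HTTP/1.1",
        "GET /../../etc/passwd HTTP/1.1",
    ]
    for number, attack in scan(sample_log):
        print(f"line {number}: possible {attack}")
```

A scanner like this flags only attacks whose footprints it already knows, which is precisely why no IDS catches every event (point 2 above) and why behavior-based detection complements it.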
Reflections :
The latest news of cyber attacks overwhelming government websites in the US and South Korea mirrors the current dilemma of computer security professionals. It is quite certain that there is no absolute security: even the biggest economies, from which these technologies originated, are not spared from such attacks; they are even more vulnerable to security glitches. As computer and network systems infiltrate every aspect of our society, computer security attracts considerable resources from both the research community and commercial companies. Thus, the use of Intrusion Detection Systems (IDS) and other types of computer security products is being maximized. Although significant to the current dilemma, these systems may be only relative safeguards and may become obsolete given the faster evolution of intrusions and attacks: an attack may take another form, making the IDS incapable of detecting it. We can only hope that IDS can reduce the number of successful attacks.
Given this situation, I would say that attacks on computer security systems come down to human ethical behavior, and to computer ethics in particular.

Thursday, July 2, 2009

Web Searching, Search Engines and Information Retrieval

Lewandowski, Dirk (2005). Web searching, search engines and information retrieval [Electronic version]. Information Services and Use, 25(3), 137-147.

http://eprints.rclis.org/4620/1/isu_preprint.pdf

Abstract :

The web contains a vast collection of information, and the most convenient way to navigate it is by using search engines. This article focuses on studies of Web searching, specifically on search engines as information retrieval tools. Factors such as web coverage and content, the up-to-dateness of databases, the invisible web, and spam are considered challenges to web indexing. It also provides a preview of how users search the web and of the process of document ranking by search engines, and then gives an overview of how to measure the quality of search engines.
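The article discusses document ranking only in prose, so as an illustration here is the classic TF-IDF scoring scheme in Python: a textbook baseline for ranking, not the actual algorithm of any engine the paper covers.

```python
# Minimal TF-IDF ranking sketch: a classic textbook scoring scheme,
# not the proprietary algorithm of any commercial search engine.
import math
from collections import Counter

def rank(query: str, documents: list[str]) -> list[tuple[float, str]]:
    """Score each document against the query with TF-IDF, highest first."""
    tokenized = [doc.lower().split() for doc in documents]
    n = len(tokenized)
    scored = []
    for tokens in tokenized:
        counts = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            tf = counts[term] / len(tokens)              # term frequency
            df = sum(1 for d in tokenized if term in d)  # document frequency
            idf = math.log((1 + n) / (1 + df)) + 1       # smoothed inverse df
            score += tf * idf
        scored.append(score)
    return sorted(zip(scored, documents), reverse=True)

if __name__ == "__main__":
    docs = [
        "open access journals and repositories",
        "search engines rank web documents",
        "ranking documents with search engines and link analysis",
    ]
    for score, doc in rank("search engines", docs):
        print(f"{score:.3f}  {doc}")
```

Real engines combine such content scores with many other signals (link analysis among them), which is why, as the article notes, engine quality depends on more than the size of the index.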

Three (3) things I learned from reading the article :


1. It is interesting to know that the indexable web contains at least 11.5 billion pages, not including the invisible web, and that the search engine market, while open, is shared by only a few companies and dominated by just one: Google.

2. According to studies, search engines do not follow a fixed update cycle, so indexed information grows stale; in addition, they do not index the large volume of high-quality data on the invisible web, making it hard for users to find the most reliable sources. Free information is often outdated (e.g., Google Scholar).

3. The article takes the broader view that web searching is still limited, considering the large volume of information indexed on the web, and that the quality of a search engine depends not only on the number of materials indexed but on the quality of the documents it contains and the behaviour of users towards it.

Reflections :

Upon reading the article, I agree that authentic, high-quality information is still hard to find. I realized this when it took me so long to find an article that suits my reading assignment: I was quite disappointed after searching all of Google for a current and reliable article about information retrieval, only to find nothing, or articles too technical for a librarian. The vast number of search results (hits) wore me down until I resorted to open access databases. That realization is the product of hands-on experience, and the RA played a great role in it. Search engines are fast and convenient information retrieval tools, but too much reliance on them results in a shallow take on the needed information, with questionable authority and outdated sources. Although this take is quite pessimistic, I still believe that recent developments in this field will help answer the more complex problems of information searching. As an information professional dealing with different queries, I would consider the quality over the quantity of information.

The future is going digital, and I think search engines will play a vital role in MEGA's online digital library project. Putting it on the web would mean greater visibility and would maximize the company's earning potential; it is the best way to reach the global market, and probably the web's biggest contribution to companies looking for promotional avenues. Going global, however, raises questions about the accessibility of information; giving away free information is still far from reality.