Open Source Conference Albania, OSCAL 2015

[alert type="info" title=""]This is a blog post originally written by Redon Skikuli on his blog and has been aggregated with the author's permission.[/alert]


OSCAL (Open Source Conference Albania) is the first international conference in Albania organized by Open Labs to promote software freedom, open source software, free culture and open knowledge, concepts that originally started more than 25 years ago.

The second edition of the conference will take place on 9 & 10 May 2015 in Tirana (Godina Liria) and will gather free/libre open source technology users, developers, academics, governmental agencies and people who share the idea that software should be free and open for the local community and governments to develop and customize to their needs, and that knowledge is communal property, free and open to everyone.

I'm excited, proud and lucky to be part of the organizing team of the second edition of the event, working with a great group of Albanian FLOSS enthusiasts who know how to create quality projects in a decentralized way. This edition is organized in the most decentralized way possible, both in the decision-making process and in the software used to document and plan activities and tasks. These tools include, but are not limited to, Etherpad, Telegram for chat and WordPress for the maintenance of the website. Unfortunately in some cases we also used some proprietary cloud services, but we are planning to change this in the next edition.

Working and taking decisions in a decentralized way is not only amazing, but also the key theme of my talk during the first day, and it is the main message we want to share with the participants during OSCAL 2015.

Here is the list of some of the inspirational speakers for this year, the agenda, the blog section with all the latest news, a humble guide to Tirana for our friends from abroad, some banners in case you dig the whole thing and want to spread the #OSCAL2015 vibe, and the mobile app, your companion during the event. There will also be competitions, side events related to OpenStreetMap, LibreOffice, Mozilla and Wikipedia, and a massive after-party.

Participation is free of charge, but online registration is required.

Looking forward to the result of months of hard work from all the team and the amazing volunteers in the second weekend of May 2015!

Across the Atlantic: Journalism++ opens its first chapter outside of Europe


Journalism++, the data journalism agency, opens its first chapter outside of Europe: Jornalismo++ São Paulo, also the first data journalism agency in Brazil. The Brazilian office will strengthen current data journalism teams and lead data-storytelling projects for news media organisations in the region, adding to J++'s portfolio of award-winning projects such as Datawrapper and Broken Promises.

Brazilian newsrooms are catching up to the data journalism revolution, although most of them still don't have the resources to hire professionals from different backgrounds, such as Computer and Data Science, Design and Social Network Analysis, to lead data-driven investigations. Jornalismo++ São Paulo is an effort to fill this gap with a handpicked team of experts with extensive experience in major Brazilian newsrooms and data journalism projects. "We want to bring data journalism to Brazil, helping newsrooms that want to do good journalism with data, but don't have the manpower to do it in the short term", says Marco Túlio Pires, journalist and programmer, one of the founders of the chapter in São Paulo.

Besides Marco Túlio Pires, who also coordinates School of Data Brazil, the team in São Paulo is led by four other professionals: Juan Torres, editor of the city desk at the Correio newspaper, the biggest in Salvador; Natália Mazotte, teaching assistant at the Knight Center for Journalism in the Americas and also School of Data Brazil coordinator; Tiago Mali, Training Director at Brazil's Association for Investigative Journalism; and Thomaz Rezende, who worked as a programmer and designer for VEJA Magazine.

The name of the agency is a pun combining a common operator in programming languages with journalism itself. "The '++' operator adds one to a numeric variable. In other words, we want Jornalismo++ to go beyond traditional journalism, even beyond what's already on the web. In our work, we increment journalism with skills from other areas, such as Computer Science, Design and Data Analysis", explains Natália.

Jornalismo++ São Paulo will also maintain a blog about data journalism with the latest updates in the field for a Portuguese-speaking audience. For more information about J++ São Paulo, visit their website.

Open Data in the Philippines: Best practices from disaster relief and transportation mapping

While attending Geeks on a Beach last month, we also spent some time in Manila to visit a few labs and agencies, and had several discussions on the state of open-data in the Philippines.

Quick reminder: open-data is a recent trend in which governments, companies and institutions release their datasets freely, so that users, developers, citizens or consumers can make use of them and create new services (check FixMyStreet for a "citizen 2.0" stint or FlyOnTime for a more commercial approach).

The Philippines, a country of 100 million people which we have been exploring, hosts quite a few very good applications of open-data, with strong support from the government side. Here are some of their creations, with explanations from Ivory Ong, Outreach Lead of Open Data for the Department of Budget and Management of the government of the Philippines.

Key milestones of open-data in the Philippines

The major milestone for open-data in the Philippines was the official launch of the data portal on January 16, 2014, after a six-month development period. "We have had 500,000 page views as of June this year. We published 650 datasets at the time and already had infographics (static data visualizations and interactive dashboards), which was the unique selling point of our data portal. We were able to push out an additional 150 datasets by May 5, 2014", says Ivory.


The team also led two government-organized hackathons: #KabantayNgBayan (on budget transparency) and Readysaster (on disaster preparedness) to build awareness of the use and benefits of open government data. Another milestone was a Data Skills Training for civil society organizations, media, and government to build capacity in data scraping, cleaning, and visualizing.


"Back in June, we likewise conducted our first Open Data training at a city level (Butuan City, Agusan del Norte) where local civil society organizations and local government units created offline visualizations from data disclosed by the Department of Interior and Local Government (DILG) via the Full Disclosure Policy Portal", she adds.

Mapping transport in Metro Manila with students embedded on all routes

While talking with Ivory and Levi Tan Ong, one of the co-founders of By Implication, a digital agency, I heard quite a funny story.

As in so many emerging markets, the transportation system has grown organically. Except for the MRT and subway systems, where an official map helps to navigate the city, most routes by local bus (dubbed jeepneys in the Philippines, though one can think of Nairobi's matatus as well) are unwritten. People just know them; stations are all over the road and nowhere at the same time.


So the Department of Transport launched two initiatives to solve the issue: first, putting students with GPS plotting software in all the jeepneys and local buses to map their actual routes, and then releasing the data so the developer community could build an app for it. "From the little that I know, this was done because the Department of Transport and Communication and its attached agencies have clashing statistics on the exact number of routes", adds Ivory.


Creative agency By Implication then won the Open Community Award at the Philippines Transit App Challenge with an app which helps you to know which combination of transportation to use to get from A to B… quite convenient for the foreigner I am in the gigantic Metro Manila area! The app has recorded about 50,000 requests per month since inception, and while there are still some glitches in the data, it's the first real online map and direction service for Manila.

Where is the Foreign Aid for disaster going? Open Reconstruction will tell

The same agency is also behind Open Reconstruction, an open-data platform which tracks where the aid money went after typhoon Yolanda hit the archipelago in November 2013.



It's not just storytelling about where funds are allocated, as Levi says: "Several towns asked for money to rebuild infrastructure and housing, but at that time it was a long process of at least five steps to get funding, and everything was on paper. So what we provide is a digitalisation of the aid process: first, by streamlining the process of applying for money and making all steps digital and traceable, and second, by releasing this data to the public to increase transparency of the overall aid effort".



The connection between the agency's work and the government open-data team seems to work well on the topic of foreign aid. Ivory adds: "Context at the time was that there were a lot of news releases saying that humanitarian aid was coming in specifically for Yolanda. There were assumptions that government agencies might be getting funds yet not using them for their intended purpose. When we finally launched the site and finished the scoping of the information-goods-cash flow [see infographic from the FAITH site below], we found out that only a small portion went to government and a vast majority went to multilateral agencies such as the UN and the Philippine Red Cross. Public demand died down because of it".


Open Reconstruction is the other half of what the open-data team wanted FAiTH data to be connected to: how the money was spent and if it was used for the intended purpose. It gives anyone, by bringing data to light, a chance to be a watchdog to hold government to account.

What’s next for open-data in the Philippines? Training, training, training

In just a few months, the open-data community hit quite a few convincing milestones, both with government support and the involvement of the developer community. There's still a lot to do, as Ivory tells us, because as in any digitalisation effort, training, change management and making sure the administration and the public understand and accept this new policy are key.

"I guess this goes back to the first time we ran the training to create offline data visualizations back in June. Local government unit representatives who were intimately familiar with local budget data had an easier time creating visualizations and explaining them. After a crash course on free online tools they can use, we went into the workshop proper, where they selected PDF files from the Full Disclosure Policy Portal (based on the city/municipality they lived in) and discussed with their groupmates how best to visualize them using colored paper and pentel pens.

These actors at a local level are important since they serve as potential information intermediaries who can communicate data into digestible stories that citizens can relate to based on their needs. Citizens who reside in remote or rural areas and are not familiar with government jargon/processes can be informed and empowered if intermediaries exist.

From our initial experience, I think I can propose four must-have skills for intermediaries:

  • technological capacity (i.e. use of ICT) to clean/structure/visualize data
  • good understanding of government vocabulary and process (for data analysis and interpretation)
  • deep knowledge of local / community needs and priorities
  • communication skills, particularly storytelling with data

The last skill is important because stories are easier to understand than technical jargon. Filipinos are very much into knowing what's what in the lives of family, friends, celebrities, and politicians. Stories trump statistics in this case, so learning how to narrate what datasets mean can be more useful. If Open Data is to make an impact in the lives of citizens, it must be in a language that is relatable and understandable."

Written by Martin Pasquier from Innovation Is Everywhere

7 Predictions for “Open Data” in 2015

What's going to happen to the "open data" movement in 2015? Here are Dennis D. McDonald's predictions:

  1. Some high profile open data web sites are going to die. At some sites the lack of updates and lack of use will catch up with them. Others will see highly publicized discussions of errors and omissions. For some in the industry this will be a black eye. For others it will be an "I told you so" moment causing great soul-searching and a re-emphasis on the need for effective program planning.
  2. Greater attention paid to cost, governance, and sustainability. In parallel with the above there will be more attention paid to open data costs, governance, and program sustainability.  Partly this will be in response to the issues raised in (1) and partly because the “movement” is maturing.  As people move beyond the low-hanging-fruit and cherry-picking stage they will be giving more thought to what it takes to manage an open data program effectively.
  3. Greater emphasis on standards, open source, and APIs. This is another aspect of the natural evolution of the movement. Much of the open data movement has relied on “bottom up” innovation and the enthusiasm of a developer community accustomed to operating on the periphery of the tech establishment. Some of this is generational as younger developers move into positions of authority. Some is due to the ease with which data and tools can be obtained and combined by individuals and groups working remotely and collaborating via systems like GitHub.
  4. More focus on economic impacts of open data in developed and developing countries alike. While many open data programs have been justified on the basis of laudable goals such as “transparency” and “civic engagement,” sponsors will inevitably ask questions about “impact” as update costs begin to roll in.  Some of the most important questions are also the simplest to ask but the hardest to answer, such as, “Are the people we hoped would use the data actually using the data?” and “Is using the data doing any good?”
  5. More blurring of the distinctions between public sector and private sector data. One of the basic ideas behind making government data “open” is to allow the public and entrepreneurs to use and combine public data with other data in new and useful ways. It is inevitable that private sector data will come into the mix. When public and private data are combined some interesting intellectual property, ownership, and pricing questions will be raised. Managers must be ready to address questions such as, “Why should I have to pay for a product that contains data I paid to collect via my tax dollars?”
  6. Inclusion of open data features in mainstream ERP, database, middleware, and CRM products. Just as vendors have incorporated social networking and collaboration features with older products, so too will open data features be added to mainstream enterprise products to enable access via file downloads, visualization, and documented APIs. Such features will be justified by the extra utility and engagement they support. Some vendors will incorporate monetization features to make it easier to track and charge for data the new tools expose.
  7. Continued challenges to open data ROI and impact measurement. As those experienced with usage metrics will tell you, it's not just usage that's important, it's the impact of usage that really counts. In the coming year this focus on open data impact measurement will continue to grow. I take that as a good sign. I also predict that open data impact measurement will continue to be a challenge. Just as in the web site world it's easier to measure pageviews than the impacts of the information communicated via the pageviews, so too will it continue to be easier to measure data file downloads and API calls than the impacts of the use of the data thus obtained.

By Dennis D. McDonald, Ph.D.

Giving research data the credit it’s due

In many ways, the currency of the scientific world is publications. Published articles are seen as proof – often by colleagues and future employers – of the quality, relevance and impact of a researcher’s work. Scientists read papers to familiarize themselves with new results and techniques, and then they cite those papers in their own publications, increasing the recognition and spread of the most useful articles. However, while there is undoubtedly a role for publishing a nicely-packaged, (hopefully) well-written interpretation of one’s work, are publications really the most valuable product that we as scientists have to offer one another?

As biology moves more and more towards large-scale, high-throughput techniques – think all of the ‘omics – an increasingly large proportion of researchers’ time and effort is spent generating, processing and analyzing datasets. In genomics, large sequencing consortia like the Human Genome Project or ENCODE  were funded in part to generate public resources that could serve as roadmaps to guide future scientists. However, in smaller labs, all too often after a particular set of questions is answered, large datasets end up languishing on a dusty server somewhere. Even for projects whose express purpose is to create a resource for the community, the process of curating, annotating and making data available is a time-consuming and often thankless task.


Current genomics data repositories like GEO and ArrayExpress serve an important role in making datasets available to the public, but they typically contain data that is already described in a published article; citing the dataset is typically secondary to citing the paper. If more, easier-to-use platforms existed for publishing datasets themselves, alongside methods to quantify the use and impact of these datasets, it might help drive a shift away from the mindset of ascribing value purely to journal articles towards a more holistic approach where the actual products of research projects – including datasets as well as code or software tools used to analyse them, in addition to articles – are valued. Such a shift could bring benefits to all levels of biological research, from ensuring that students who toiled for years to produce a dataset get adequate credit for their work, to encouraging greater sharing and reuse of data that might not have made it into a paper but still has the potential to yield scientific insights.

Tools and platforms to do just this are gradually emerging and gaining recognition in the biological community. Figshare is a particularly promising platform that allows for the sharing and discovery of many types of research outputs, including datasets as well as papers, posters and various media formats. Importantly, items uploaded to Figshare are assigned a Digital Object Identifier (DOI), which provides a unique and persistent link to each item and allows it to be easily cited. This is analogous to the treatment of articles on preprint servers such as arXiv and bioRxiv, whose use is also growing in biological disciplines; however, Figshare is more flexible in terms of the types of research output it accepts. In addition to the space and ability to share and cite data, the research community could benefit from better quantification of data citation and impact. Building on the altmetrics movement, which attempts to provide alternative measures of the impact of scientific articles besides the traditional journal impact factor, a new Data-Level Metrics pilot project has recently been announced as a collaboration between PLOS, the California Digital Library and DataONE. The goal of this project is to create a new set of metrics that quantify usage and impact of shared datasets.

Although slow at times, the biological research community is gradually adapting to the new needs and possibilities that come along with high-throughput datasets. Particularly in the field of genomics, I hope that researchers will continue to push for and embrace innovative ways of sharing their data. If data citation becomes the new standard, it could facilitate collaboration and reproducibility while helping to diversify the range of outputs that scientists consider valuable. Hopefully, the combination of easy-to-use platforms and metrics that capture the impact of non-traditional research outputs will provide incentives to researchers to make their data available and encourage the continued growth of sharing, recognizing and citing biological datasets.

WikiAkademia and AdaCamp in Berlin!

I’ve been to a number of open source and technical conferences over the last few years, most of which I’ve thoroughly enjoyed. But AdaCamp is a special kind of experience.

AdaCamp is a conference dedicated to increasing women’s participation in open technology and culture: open source software, Wikipedia and other wiki-related projects, open knowledge and education, creative fan culture, remix culture, and more. AdaCamp brings women together over two days to build community, share skills, discuss problems with open tech/culture communities that affect women, and find ways to address them.

AdaCamp gave me the ability to see how a major conference's code of conduct was deeply flawed, and the confidence to approach them with suggestions for how to fix it.

It’s encouraged me to speak frankly about diversity in our communities and how to improve it.

It’s helped me to meet so many incredible women, to share experience and to learn a lot.

I finally met other Wikipedians from all over the world. I have been contributing to Wikipedia for a year and had never met anyone in person. That motivated me a lot and made me feel proud of my work with WikiAcademy Albania. I've made contacts that will lead to exciting future workshops and events at our hackerspace, Open Labs.

One of the best things about AdaCamp was learning about imposter syndrome: the belief that one's work is inferior and one's achievements and recognition are fraudulent, common in open technology and culture endeavors where public scrutiny of one's work is routine. That session was empowering.

The workshop about clean code was very useful, thanks to Franzi. The compliments corner was fun and inspiring as well. The discussions about feminism, women in open culture and non-open culture, code, education, social events and everything else made AdaCamp the perfect place to be for those two days.

Now I know that I want to reach out to other women who identify as "geek", "feminist" or both. I realized that I was among not only amazingly smart women, but also very generous people.

If you've never been to a feminist conference, you're missing out on a lot.

If you’ve never found yourself surrounded by dozens of brilliant, empathetic, creative and determined women, you should consider giving it a try. If you’ve never gone from learning about how open source cloud computing platforms work straight to a discussion of microaggressions and how to deal with them, finishing things off by sharing your favorite feminist response gifs – well, maybe you should go to AdaCamp.
Written by Greta Doçi
All photos and posts are CC BY-SA

The value of sharing your know-how openly

In June 2010 I graduated from the University of Sheffield with a Doctor of Philosophy degree in Electronic Engineering and quickly embarked upon the typical academic career trajectory: I participated in conferences in the US and Asia, and took part in the race to publish papers in the best-regarded academic journals in my field. Over time I achieved a respectable standing amongst my peers, but I could not shake the feeling that there was more to be done to propel my career and give it a stronger aim.


How I discovered academic papers are not the only solution to progress my career

Somewhere in the fall of 2012 I attended a workshop aimed at helping scientists promote themselves, called "Making the most of your Postdoc". Amongst the various pieces of advice offered to us, one particularly stuck with me: "raise your profile by creating a profile". The person leading the workshop gave the example of a fellow researcher who had created an "about me" profile page that stated his area of interest and listed useful information such as past publications, presentations and grants he had obtained.

A few days later I was wondering how I could best advertise some of my non-peer-reviewed but equally important practical know-how, such as "troubleshooting problems in order to keep my research equipment operational" or "knowing what every single wire does inside that hardware rack". In fact, I had acquired a vast amount of non-peer-reviewed knowledge in order to successfully create my peer-reviewed output. The act of designing, building, re-designing, fixing and improving things had become so routine that I hardly noticed how impressive it was to an outsider until I started trying to explain what I was doing to my first PhD students. By the time my third PhD student had arrived, and I was explaining the same concepts and ideas, I realised my knowledge could well be extremely useful to others as well. And then it struck me: why not create a blog to share all those bits of knowledge with those who might find them useful? This "Eureka!" moment led to the inception of my blog, which was inaugurated in November 2012 with my first series of know-how posts.

Blogging allowed me to reach a whole new level of recognition among my peers

When I started, I had of course expected some interest from my fellow colleagues and PhD students. However, the positive reaction truly surprised me: my visitor numbers climbed steadily over the first few months, and by mid-2013 I was getting 400 unique visitors per month. I also started to get comments on my posts as well as questions from other researchers in academia and industry. I answered those questions dutifully and wrote a new series of articles to cover the missing content. It was not long before the first consulting requests reached my mailbox. It occurred to me that my knowledge was not only useful to others; that usefulness gave it an inherent value.

Now, not only am I able to draw a regular income from my consulting services, but I am on the way to doubling my previous income as an academic researcher through consulting alone. Additionally, working with industry provides a pleasant break from my closeted research existence and the opportunity to meet many interesting new people who recognize me for my expertise.

Faebian Bastiman

Ambassador Eva Constantaras helps Kenyan journalists explain lag in development in data-driven way

For four days, 14 journalists from North Rift Kenya came together for a data journalism workshop led by Internews data journalism trainers Eva Constantaras and Dorothy Otieno. They investigated how to produce more in-depth stories on health and education topics through data journalism.

We were happy to contribute free Pro accounts to participants who went on to produce their own data visualisations following the training. The distance from Kenya's open data community in Nairobi makes it challenging for journalists from rural areas to produce data stories after such workshops, because of limited access to data, trainers and editors who provide the training, mentoring and support needed to produce data journalism.

Here are some of the outcomes:

Michael W. Odhiambo (@mowesonga), a correspondent for the Standard, explored data on education levels in his county and the constituencies within it. He found that one constituency was pulling up the average for the entire county and the rest lagged behind. He is currently looking into whether the public budget for maternal healthcare is adequate for the treatment of complications such as fistulas.

Caleb Kemboi (@drkemboi), a journalist with Thomson Reuters, explored a dataset that measured basic literacy and numeracy skills among school-age children. His visualisation focuses on the lowest-performing constituencies in the region. His current investigation seeks to identify the reasons behind the school dropout rate among girls in the North Rift region.

Joshua Cheloti (@CeejayCheloti) is a radio journalist with Biblia Husema Broadcasting, a radio station in Eldoret. His visualisation identifies a correlation between hostile environments and poor performance on numeracy and literacy exams.

Cheloti is committed to human interest reporting.  He said, “Before the training I feared stories involving data, but now I enjoy such stories as I can professionally analyse the data and use it to come up with a radio story that can easily be understood by my audience.” His next investigation looks at rising cases of chronic diseases in North Rift and the funds and facilities to treat them.

Each of the 14 participants will develop one data-driven story over the next month with a special focus on simplifying numbers, understanding the source of the data and putting breaking health and education stories into context using data. They will also participate in a follow-up training to see what questions their own investigations raised and which data they can use to develop health and education beat reporting.

About Eva:

Eva Constantaras (@evaconstantaras) is the Internews Data Journalism Advisor and specialises in cross-border journalism projects to combat corruption and encourage transparency. She has managed projects and reported from across Latin America, Asia and East Africa on topics ranging from displacement and kidnapping by organised crime networks to extractive industries and election violence. Her reporting has appeared in media outlets including El Mundo and El Confidencial in Spain and the Seattle Times and El Tiempo in the Americas.

About the Ambassador Program: Ambassadors bring the power of data visualisation to journalists, activists, communication officers, university students and classroom teachers all over the world. The network is designed to enhance data literacy while also encouraging knowledge sharing between program members. Interested in joining? Ask [email protected] for more details.

Software Freedom Day 2014 celebrated in Nepal

Software Freedom Day (SFD) is a worldwide celebration of Free and Open Source Software (FOSS). In Nepal, the FOSS Nepal Community has been organizing and celebrating Software Freedom Day regularly since 2005. From 2007 to 2009, for three consecutive years, FOSS Nepal's SFD celebration was recognized as the best event in the world. This year, with the aim of enriching the open communities rather than just reaching out about the Free and Open Source movement, the FOSS Nepal Community organized Software Freedom Day 2014, Kathmandu at Trade Tower (Elite Hall), Thapathali, Kathmandu, Nepal on 20 September 2014. The format was completely different from past years' celebrations: it was a half-day event where more than 14 open communities currently working under the open philosophy gathered in one place, and the schedule was kept simple.


Communities like PHP Developers Nepal, Google Developer Group (GDG) Kathmandu, Robotic Association of Nepal (RAN), Wikipedia (Nepali), Mozilla Nepal, OpenStreetMap (OSM) Nepal, Open Knowledge Nepal, Ruby Developers Nepal, Google Business Group (GBG) Kathmandu, WordPress Nepal, Chitwanix, Linux Terminal Server Project (LTSP) Nepal and many more played celebrating-partner roles for this year's Software Freedom Day. The event was supported by the Nepal Government Department of Information Technology, Rooster Logic and the Computer Association of Nepal (CAN).

As per the schedule, the event was officially opened by the hosts Ms. Shristi Baral and Mr. Rajan Kandel, though 15 minutes late due to some management issues. After a short introduction to the Free and Open Source Software (FOSS) Nepal Community and Software Freedom Day 2014, they invited Mr. Sagar Chhetri, the first presenter of the event, to give his presentation on Chitwanix OS.

Mr. Sagar Chhetri used his 15 minutes to explain what Chitwanix OS is, along with the Chitwanix Student Partners (CSP) and Chitwanix Associate programs. He said that the Chitwanix community is slowly growing and is going to make a vast impact in the future. After his presentation, the host called Mr. Saroj Dhakal to give a presentation and short talk on Nepali Linux and Nirvikalpa. Saroj Dhakal started his presentation with a question: how many people here know about Nepali Linux? Most of the hands in the hall were raised, because Nepali Linux was the first Linux-based Nepali operating system, developed with the help of the Nepal Government and Madhan Bhandari Pustaklaya. He said that Nepali Linux is going to be reborn and a new version will be released soon. He also gave a short introduction to Nirvikalpa: "Nirvikalpa is a collection of open source software which can be used on Microsoft Windows too".

After Saroj Dhakal's presentation on Nepali Linux and Nirvikalpa, the host called the Linux Terminal Server Project (LTSP) Nepal team for their introduction and presentation. The LTSP project is led by students of Kathmandu University (KU), all of them active members of their college's open source community, the Kathmandu University Open Source Community (KUOSC). The presenters showed their progress on LTSP along with some images.

Next, it was time for the presentation of Wikipedia (Nepali). Mr. Ganesh Poudel, one of the active Wikimedians of Nepal, was called on stage. His presentation was part progress report, part story sharing: he described how the Nepali Wikipedia community is growing day by day, gave an open invitation to all participants of Software Freedom Day 2014 to volunteer for Nepali Wikipedia, and asked everyone for their help.

The presentation session continued: after the 15-minute Nepali Wikipedia presentation, the host called Mr. Nikesh Balami from Open Knowledge Nepal to give a short introduction to the Open Knowledge community. Nikesh Balami's talk focused on Open Data and CKAN: "CKAN is a tool for making open data websites; it helps you manage and publish collections of data." He also explained the future plans of Open Knowledge Nepal and showed several example websites built using CKAN.

After Nikesh Balami's presentation, the host called Mr. Nirab Pudasaini from Open Street Map (OSM) Nepal. Mr. Pudasaini first explained what Open Street Map is and how students, companies, researchers and others can use OSM for their projects. He described how Kathmandu Living Labs (KLL) is supporting Open Street Map Nepal, and he especially thanked the students, since most of the people helping them with mapping were students. He also expressed his interest in taking Open Street Map Nepal outside the Kathmandu valley.

The event was running smoothly, but everyone was surprised when the host called the team from PHP Developers Nepal, because most of the participants did not know about the group. The team started their presentation by explaining how the PHP language helps keep the web secure. They also gave a short demo showing how we could enjoy and make full use of software if it were built for the web.

Next it was the turn of Mozilla Nepal, whose representative, Mr. Surit Aryal, gave a presentation on "MozStumbler" and "Mozilla Location Service". He introduced what kind of projects these are and made clear how the public can benefit from them.

After the Mozilla Nepal presentation, Mr. Sakin Shrestha, representative of WordPress Nepal, was called on stage. He shared how WordPress Nepal is gaining global recognition and also spoke a little about WordCamp Nepal. He said that the number of WordPress users in Nepal is increasing and suggested that participants use the WordPress CMS, which is free and secure.

An interesting turning point of the event came with the Ruby Developers Nepal presentation: a whole team was there wearing "I Love Ruby" t-shirts. They gave an introductory presentation about what Ruby Developers Nepal does, what kind of programming language Ruby is, why it is important, and so on. Everyone in the hall paid close attention because their way of presenting was very unique, and they suggested various links to visit.

After the Ruby presentation, the host called Mr. Bhupal Sapkota from Google Developer Group (GDG) Kathmandu. Mr. Sapkota explained what kinds of events GDG organizes and how to join the GDG group. He also called his teammate Mr. Saroj Dhakal back on stage to present Google Business Group (GBG) Nepal. Mr. Dhakal shared the GBG journey: how GBG was started and what kind of platform it is.

The final presentation of the event was about Open Hardware, and Mr. Dipesh Kharel from the Robotic Association of Nepal (RAN) was called on stage to give a short talk on it. He first explained how robotics can help us and made clear why the Robotic Association of Nepal represents Open Hardware. He said that it used to be really difficult to work on hardware in Nepal, but nowadays the number of people working on Open Hardware is increasing.

With that last presentation on Open Hardware, the hosts Mr. Rajan Kandel and Ms. Shristi Baral handed the mic over to the president of the Free and Open Source Software (FOSS) Nepal community, Mr. Subir B. Pradhanang, and thanked all the participants for joining the event. Mr. Pradhanang then introduced the Software Freedom Day 2014 organizing team (only the new faces) and thanked them for organizing such a wonderful event.

He then handed the mic to Mr. Hempla Shrestha for the panel and open discussion. During the panel discussion everyone changed their seating arrangement and formed one big circle with the chairs. In the first round of the discussion everyone introduced themselves one by one. In the second round, the communities were asked to share their views: what kinds of problems they are facing, what their future plans are, and so on.
In the last round of the discussion session, participants were asked to come up with new ideas they would like to work on in the future, and that round was brilliant. Six participants presented their ideas, and other interested participants who liked those ideas joined them. Hopefully they will keep working on those ideas in the times to come.

The event ended and everyone moved on to lunch.

OKFest, Berlin, 15th-17th July

It has been a long time since we last posted any news, not because there is nothing to document from Berlin, where the Open Steps journey came to its end one month ago, but simply because we were very busy preparing the next steps of our project. You may have heard about that if you happened to attend OKFest last week. The OKFN had the great idea of organizing its annual event in Berlin this year, and we were more than happy to be part of this international gathering, together with more than 1000 participants from all over the world. Of course, we met again many of the activists we had got to know in their home countries over the last 12 months. Seeing them on the other side of the world was a very warm feeling and, at the same time, the best opportunity to follow up on the latest status of the projects we documented along the way. But as the list of OK-related projects doesn't stop there, we also discovered many new faces of the community and are now eager to blog about them.


The 3-day conference took place in the Kulturbrauerei, an old red-brick brewery in the Prenzlauer Berg district of the German capital. On the first day, an OK fair was scheduled and Open Steps was invited to have a stand: a perfect opportunity to present what we learnt during the journey, discuss the difficulties arising in Europe, India, Asia or South America and share our overview of what has already been achieved. You can read our final report here. At the same time, we introduced the newest version of our directory, renamed the Open Knowledge Directory, which maps individuals and organisations from all over the world who actively support the Open Knowledge principles, whatever they are focusing on: Open Data, Open Government, Open Science, Open Source, data journalism or other related fields. The tool directly responds to challenges we experienced first hand while travelling: first, it aims to increase the worldwide visibility of OK projects, both inside and outside the OK community (because it is still difficult for an uninitiated public to get an overview of what is going on); secondly, to facilitate communication and collaboration across borders (there is obviously a big potential to join forces and share know-how). If you haven't taken a look at it yet, please check it out now, fill out the 2-minute form to be listed, and spread the word!

The programme for the rest of the festival was full of interesting sessions, which made it very difficult to choose between them, not to mention the unconference happening alongside and the fact that we were volunteering during all 3 days, running from room to room to help make such an amazing event possible. On the last day, Neelie Kroes, EU Commissioner for the Digital Agenda and Vice-President of the Commission, honoured us with her speech, encouraging all of us to keep working hard and promising good perspectives regarding political support in Europe: an Erasmus for Open Data starting in September, and funds granted through the Horizon 2020 programme and the FI-WARE initiative. Brilliant!