CJ Fearnley

CJ Fearnley was an early leader in the adoption and implementation of Linux and Free and Open Source Software (FOSS) in Philadelphia. In 1993, he recognized the emerging value of the Linux operating system. Through his leadership position in the Philadelphia Area Computer Society (PACS), he began introducing Linux to organizations in the Greater Philadelphia region. At PACS, he organized monthly presentations on Linux and FOSS and wrote 29 columns in the organization's print periodical, The Databus. He then founded and helped build Philadelphia's premier Linux user group, the Philadelphia area Linux User Group (PLUG), where he continues to facilitate its first Wednesday meetings. After helping to establish a community and culture for Linux and FOSS in Philadelphia, CJ started building his first company, LinuxForce, to be the “go-to” firm for organizations wanting to realize the promise and power of Linux. LinuxForce is a leading technology services provider specializing in the development, implementation, management and support of Linux-based systems, with particular expertise in Debian GNU/Linux. LinuxForce provides remote Linux systems management services to clients including The Franklin Institute Science Museum and the Aker Philadelphia Shipyard. CJ is a member of the Buckminster Fuller Institute, the Synergetics Collaborative (SNEC), the Association for Computing Machinery (ACM), IEEE (Institute of Electrical and Electronics Engineers), and the IEEE Technology Management Council. He received his BA in Mathematical Sciences and Philosophy from Binghamton University in 1989, where he was a Regents Scholar, and has done graduate work at Drexel University. CJ was named to the Philadelphia Business Journal's 2006 “40 Under 40” list as one of the region's most accomplished young professionals.

Anticipating the Emerging Technologies for the Enterprise (ETE 2010) Event

I will be attending the 5th Annual Emerging Technologies for the Enterprise Conference (ETE 2010) this Thursday and Friday, April 8-9, 2010. The event is billed for “developers, architects, and IT executives” and attempts to provide a dynamic forum for “emerging technology and Open Source”.

I look forward to seeing Robert C. (Uncle Bob) Martin’s keynote on “Bad Code, Craftsmanship, Engineering, and Certification”, a panel discussion on “Open source is a commercial enterprise”, another panel on “Social Media: Why should I care?”, a second Bob Martin presentation on “Agility and Architecture”, Mary Poppendieck on “Cost Center Disease”, Bonnie Aumann on managing developers, Michael Coté’s keynote on “The Pragmatic Cloud”, Geir Magnusson Jr. on “Project Voldemort”, and Brian McCallister on “Failure Happens” (one of the very few talks on systems administration). Then there’s an interesting panel on “Battle of the Frameworks II” (its predecessor, the ETE 2008 “Web Framework Shootout”, is on-line in two parts. Hopefully this year people will respect each other’s frameworks more and have a mature discussion about the tradeoffs that each incurs. I was impressed with Marjan Bace, the moderator, for facilitating some reasonable comments amidst too much hyperbole and for bringing the discussion to an effective conclusion). Finally, I think I’ll attend the presentations by Molly Holzschlag on “Demystifying HTML5”, David A. Black on some CS (computer science) precepts, and Audrey Troutt on “Influencing your way to agile”.

It looks like it will be an engaging two-day event. I’m looking forward to meeting many leaders in the local Philly and broader FOSS (Free and Open Source Software) technology community and getting to downtown Philly for some out of the office learning and networking.

While I’m mentioning events, for those who do not know, I moderate the Q&A for the first Wednesday of the month meeting of the Philadelphia Linux User’s Group (PLUG) which will be on “Functional Programming Using Haskell” this month. It is going to be a busy week! If you plan to attend either event, I’ll see you there.

Posted by CJ Fearnley in FOSS Community, News, 2 comments

The Nature and Importance of Source Code and Learning Programming with Python

Last year a client asked us for advice on getting started with programming. So I thought I’d share some thoughts about programming, its relationship with FOSS (Free and Open Source Software) management and why Python is a good language for learning programming including some great on-line resources. But first I want to make sure our business-oriented readers understand the nature and importance of source code.

The “source” aka “the code” provides a language in which computer users can create or change software. One does not have to be a programmer to work on the code. In fact, every computer user is, ipso facto, a programmer! Menus, web interfaces, and graphical user interfaces (GUIs) are some of the more facile “languages” for computer programming that everyone, even children, can readily learn and use. Of course, building complex software systems requires a more expressive specification language than a web form, for instance, can provide.

Although all computer software is specified with source code, FOSS systems are unique in that the source code is made available with the software. In contradistinction, software lock-in or vendor lock-in describes the unfortunately all too common practice of many organizations to block access to their source code.

Having access to the source code provides huge operational benefits. For one, the source can be used to understand how the software works: it is a form of software documentation (indeed, it is the most definitive form of software documentation possible!). Also, code can be easily changed to add diagnostics or to test a possible solution to a problem or to modify or add functionality. In addition, the source is a language both for specifying features to the computer and for discussing computing with others. So most mature FOSS languages have vibrant support communities in which one can participate, learn and get help.
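To make the “source as the most definitive documentation” point concrete, here is a small illustration (a sketch of mine, not from any project mentioned above): Python’s standard library can hand you the source of any pure-Python function at runtime, so the authoritative answer to “what does this really do?” is always one call away.

```python
import inspect
import textwrap

# The most definitive documentation for a pure-Python function is its
# own source; inspect.getsource retrieves it at runtime.
source = inspect.getsource(textwrap.dedent)
print(source)
```

Running this prints the actual implementation of `textwrap.dedent` as shipped with your interpreter, comments and all.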

The source is a tool: a powerful, multi-purpose, critically important tool.

Since LinuxForce focuses on FOSS, we are able to take full advantage of the availability of the code. We are always working with the source! Since most of our work is systems administration, we usually “program” configuration files. However, we also write systems software and scripts and we support software developers extensively, so we have a persistent, deep, and productive relationship with code.

But what to suggest to someone like our customer who wants to learn programming?

I remembered seeing a blurb in Linux Journal referencing an article they published in May 2000 by Eric Raymond entitled "Why Python" which argues persuasively for the virtues of the programming language Python. I had often felt that Perl’s idiosyncrasies made it difficult to use, so Eric’s critique of Perl and accolades for Python were convincing to me. In addition, I follow FOSS mathematics software and I was aware that Sage is a Python “glue” to more than fifty FOSS math libraries. I’ve been meaning to look into Python so I could use Sage. Another pull comes from my work at LinuxForce where we use a lot of Python-based software including mailman, fail2ban, Plone, and several tools used for virtual machine management such as kvm, virtinst and xen-tools. Python has a huge software repository and community. So one is likely to find good libraries to build upon (thus avoiding the extra learning curve of building everything from scratch). Python is an interpreted language which makes it easier to debug and use so the learning process is smoother.

To finish the recommendation, I just needed to find some on-line resources. First, Kirby Urner suggested these two: Wikieducator’s Python Tutorials and "Mathematics for the Digital Age and Programming in Python". Then, I checked out the Massachusetts Institute of Technology’s (MIT) OpenCourseWare which provides extensive course materials for many of their classes (I’ve already watched the full video set for a couple of MIT’s courses including the legendary Walter Lewin’s "Classical Mechanics" and have been very impressed by the quality and content of their materials). After nearly 30 years of introducing students to programming with Scheme, MIT switched to Python in 2008! The materials for their introductory Python-based course "6.00 Introduction to Computer Science and Programming" are very thorough, accessible and helpful. Their free on-line materials include the full video lectures of the class plus assignments, sample test problems, class handouts, and an excellent Readings section with references to "the Python Tutorial" and a very good free on-line textbook "How to Think Like a Computer Scientist: Learning with Python".

In conclusion, if you or anyone you know wants to learn how to program computers, I recommend starting with Python using MIT’s on-line course materials supplemented with the other on-line resources mentioned above (and summarized in the table below). I’ve now watched more than half of the videos from the MIT 6.00 course and I’ve worked through several of their assignments: this is a great course! Even with nearly three decades of experience programming, including a couple of college-level courses in the 1980s, I’m finding the class is more than just good review for me: I’ve learned a few new things (in particular, dynamic programming and the knapsack problem). Python’s clean syntax and elegant design will help as one delves into writing code for the first time. Its extensive libraries and repositories will support the application of one’s newly acquired computing skills to solve problems in the area of the student’s special interests whatever they may be … and that’s the way we learn best: by doing something that we personally care about!
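To give a flavor of the dynamic programming material mentioned above, here is a minimal sketch (my own, not taken from the MIT course materials) of the classic 0/1 knapsack problem solved with a one-dimensional dynamic-programming table:

```python
def knapsack(items, capacity):
    """0/1 knapsack via dynamic programming.

    items: list of (value, weight) pairs; capacity: maximum total weight.
    Returns the best achievable total value.
    """
    # best[w] = best value achievable with total weight at most w
    best = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate weights downward so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Three items as (value, weight) pairs, capacity 5: the best choice is
# the second and third items (100 + 120 = 220, total weight 5).
print(knapsack([(60, 1), (100, 2), (120, 3)], 5))  # → 220
```

Python’s readability shows here: the whole algorithm is two loops and a `max`, which is exactly why it works so well as a first language.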

Summary of On-Line Resources for Learning Python

Posted by CJ Fearnley in Programming, 0 comments

Some thoughts on best practices for SMTP blocking of e-mail spam

Blocking e-mail spam at the time of SMTP (Simple Mail Transfer Protocol) transfer has become a best practice. There is no point wasting precious bandwidth & disk space and spending time browsing a huge spambox when most of the incoming flow is clearly spam. At LinuxForce our e-mail hygiene service, LinuxForceMail, makes extensive use of SMTP blocking techniques (using free and open source software such as Exim, Clam AV, SpamAssassin and Policyd-weight). But we are extremely careful to block only sites and e-mails that are so “spammy” that we are justified in blocking them. That doesn’t prevent false positives, but it keeps them to a minimum.

Recently we investigated an incident where one of our users had their e-mail blocked by another company’s anti-spam system. In investigating the problem, we learned that some vendors support an option to block e-mail whose Received header is on a blacklist (in our case it was Barracuda, but other vendors are also guilty). Let me be blunt: this is boneheaded, but the reason is subtle so I can understand how the mistake might be made.

First, blocking senders appearing on a blacklist at SMTP time is good practice. But to understand why blocking Received headers at SMTP time is bad, it is important to understand how e-mail transport works. The sending system opens a TCP/IP connection from a particular IP address. That IP address should be checked against blacklists. And other tests on the envelope can help identify spam. But the message headers including the Received header are not so definite. We shall see that even a blacklisted IP in these headers may be legitimate. So blocking such e-mail incurs unnecessary risks.

The problem occurs when a user of an ISP (Internet Service Provider) sends an e-mail from home: they are typically using a transient, “dynamic” IP address. Indeed, it is possible that their IP address has just changed. Since the new address may have been previously used by someone whose virus-infected machine was sending out spam, this “new” IP address may be on the blacklists. So, through no fault of your own, you have a blacklisted IP address (I will suppress my urge to rant for IPv6, when everyone can finally have their own IP address and be responsible for its security).

Now, when you send an e-mail through your ISP’s mail server, it records your (blacklisted) IP as the first Received header. So your (presumably secure) system sending a legitimate message through your ISP’s legitimate, authenticating mail server is blacklisted by your recipients’ overambitious anti-spam system. Ouch. That is why blocking such an e-mail is just wrong. This kind of blocking creates annoying unnecessary complications for the users and admins at both sides. Using e-mail filtering to put such e-mails into a spam folder would be a reasonable way to handle the situation. Filtering is able to handle false positives whereas blocking generates unrecoverable errors.
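To illustrate the distinction (a sketch of mine; all host names and addresses below are made up), here is how the two pieces of evidence differ in a parsed message. The connecting IP, which SMTP-time blacklist checks should test, is recorded in the newest Received header, added by your own mail exchanger; older Received headers can legitimately contain a blacklisted dynamic address:

```python
from email import message_from_string

# A legitimate message relayed through an ISP's mail server.  The older
# Received header records the customer's dynamic home IP, which may be
# on a DNSBL through no fault of the sender.
raw = """\
Received: from mail.example-isp.net (mail.example-isp.net [198.51.100.10])
\tby mx.example.org with ESMTP; Mon, 5 Apr 2010 10:00:00 -0400
Received: from customer-host (pool-203-0-113-77.example-isp.net [203.0.113.77])
\tby mail.example-isp.net with ESMTPA; Mon, 5 Apr 2010 09:59:58 -0400
From: user@example-isp.net
To: friend@example.org
Subject: hello

Hi!
"""

msg = message_from_string(raw)
received = msg.get_all("Received")

# Newest hop (added by our own MX): this is the actual connecting IP,
# the right thing to check against blacklists at SMTP time.
print(received[0])   # contains 198.51.100.10 (the ISP's legitimate server)

# Older hop: a dynamic customer address.  Blocking on this header
# punishes legitimate senders.
print(received[1])   # contains 203.0.113.77 (possibly blacklisted)
```

The connection-time IP and the IPs buried in Received headers are simply different kinds of evidence, and only the former justifies an SMTP-time reject.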

Do not block e-mail based on the Received header!

Posted by CJ Fearnley in Security, Systems Management, Tech Notes, Ubuntu, 0 comments

Given 250,000 tools on the shelf, how do you manage them?

Although I haven’t seen a thoroughly researched study, I figure there must be at least 250,000 FOSS (Free and Open Source Software) tools available to every systems administrator on the planet (230,000 at SourceForge + 15,000 at Launchpad + 12,000 at CodePlex + 5,000 at Google Code and that doesn’t count the Linux kernel or any of the myriad other self-hosted projects). These 250,000+ resources comprise the full “toolbox” that admins can use for building solutions with FOSS; they represent the FOSS equivalent of COTS (Commercial Off-The-Shelf). Of course, if you add open source but non-free or commercial tools, the problem explodes combinatorially.

How can a systems administrator support the largest possible subset of these “on the shelf” resources to best service the next need from a stakeholder (like the boss or a new client)?

First let me emphasize the difficulty of the task with a list of items that systems administrators and systems management firms like LinuxForce are expected to do whenever a stakeholder presents a software need:

  • Find and Evaluate software that can meet the need:
    • Identify several candidate applications that might meet the business requirements for a given project, function, or need
    • Research the options to assess their ability to meet the requirements (we, the systems administrators of the world, are actually expected to already know which tool is “best of breed” just from our past experience. The false assumption is that if a tool isn’t well known it must not be any good. The long tail applies to the 250,000+ FOSS tools too!). In our experience such research is essential; unfortunately, there is rarely enough budget to explore the options carefully.
    • Install the tool(s) in a “sandbox” to allow the stakeholder to “try it out”
    • Select a tool to use or look for more options
  • Put the tool into production
    • Read the docs to identify best practices for the software’s configuration
    • Prepare an installation plan that will address (as best as possible) any upgrade glitches (yes, you have to anticipate them now or suffer the consequences later!) so that you’re prepared for when a security advisory is released (or when the stakeholder starts begging for features from a new release)
    • Figure out a support plan to handle the inevitable questions that will arise during operations
    • Integrate these considerations into the process of either installing a package or using the “configure, make, make install” steps that most FOSS tools provide for installation
    • Carefully document the “as built” configuration including all assumptions and anticipated glitches to help yourself or future admins during the maintenance phase
  • On-Going Maintenance
    • Monitor the software
    • Subscribe to any relevant security mailing lists for the software so that you are apprised when a security (or other major) problem is detected
    • Track general trends relating to the software and its alternatives so that you are ready to respond if the project goes dormant or is eclipsed by newer, superior technology.
    • Upgrade routinely

About 15 years ago I noticed that the explosion of ready to use FOSS tools plus the trend toward general purpose tools and away from custom software was leading to a combinatorial crisis in software maintenance. I saw that it was the systems administrator’s responsibility to address the situation.

It has become apparent to me that the solution would require use of convention, standards and policy to reduce the complexity of the problem to manageable proportions. I searched for the most “standardized” conventions and policy-enforcing environment that would also provide the most flexible access to the most FOSS tools. The solution I found is Debian GNU/Linux, the universal operating system (although Ubuntu and other Debian derivatives provide most of these benefits as well).

Debian simplifies the software evaluation process (apt-cache [search|show]). Debian simplifies installation (apt-get install) as well as security and new-version upgrades (apt-get [upgrade|dist-upgrade]). Debian uses conventions and packages to simplify identifying best practices for administering the software (/usr/share/doc/[package]/, /var/lib/dpkg/info/[package].postinst, and wikis, mailing lists, bug reports, etc.). But the key benefit for managing the combinatorial explosion of FOSS tools is the Debian community’s value of striving to configure each package to automatically support the most common use cases while also providing support for unusual configurations (so you save tons of time configuring the software).

In summary, the Debian GNU/Linux system provides the infrastructure needed to manage the combinatorial explosion of off the shelf FOSS tools cost effectively. If you have to service a lot of users, customers, or clients with challenging, diverse needs, I think Debian is the most cost effective way to meet their needs and deliver quality maintenance on an on-going basis year after year after year.

Posted by CJ Fearnley in Debian, Systems Management, 1 comment

A FOSS Perspective On Richard Schaeffer’s Three Tactics For Computer Security

Federal Computer Week published a great, succinct quote from Richard Schaeffer Jr., the NSA’s (National Security Agency) information assurance director, on three approaches that are effective in protecting systems from security attacks:

“We believe that if one institutes best practices, proper configurations [and] good network monitoring that a system ought to be able to withstand about 80 percent of the commonly known attack mechanisms against systems today,” Schaeffer said in his testimony. “You can actually harden your network environment to raise the bar such that the adversary has to resort to much, much more sophisticated means, thereby raising the risk of detection.”

Taking Schaeffer’s three tactics as our lead, here is a FOSS perspective on these protection mechanisms:

Best practices implies community effort: discussing, sharing and collectively building understanding and techniques for managing systems and their software components. FOSS (Free and Open Source Software) communities develop, discuss and share these best practices in their project support and development forums. Debian’s package management system implements some of these best practices in the operating system itself thereby allowing users who do not participate in the development and support communities to realize the benefits of best practices without understanding or even knowing that they exist. This is one of the important benefits of policy- and package-based operating systems like Debian and Ubuntu.

Proper configuration is the tactical implementation of best practices. Audit is a critical element here. Debian packages can use their postinst scripts (which are run after a package is installed, upgraded, or re-installed) to audit and sometimes even automatically fix configuration problems. Right now, attentive, diligent systems administrators, i.e., human beings, are required to ensure proper configuration as no vendor — not even Debian — has managed to automate the validation let alone automatically fix bad configurations. I think this is an area where the FOSS community can lead by considering and adopting innovations for ensuring proper configuration of software.
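To suggest what automated configuration audit might look like, here is a hypothetical sketch (this is not Debian’s mechanism; real packages do such checks, where they exist at all, in shell maintainer scripts): a small checker that compares a “directive value” style configuration file against expected settings.

```python
def audit_config(path, required):
    """Report config directives that are missing or set to the wrong value.

    path: a configuration file using "Directive value" lines.
    required: dict mapping directive name -> expected value string.
    Returns a list of human-readable problem descriptions.
    """
    found = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            parts = line.split(None, 1)
            if len(parts) == 2:
                found[parts[0]] = parts[1]
    problems = []
    for key, expected in required.items():
        actual = found.get(key)
        if actual is None:
            problems.append("%s: missing (expected %r)" % (key, expected))
        elif actual != expected:
            problems.append("%s: %r (expected %r)" % (key, actual, expected))
    return problems
```

For example, `audit_config("/etc/ssh/sshd_config", {"PermitRootLogin": "no"})` would flag a root-login setting that had drifted from policy (the path and directive are chosen purely for illustration).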

Good network monitoring invokes the discipline of knowing what services are running and investigating when service interruptions occur. Monitoring can contribute to configuration auditing and can help focus one’s efforts on any best practices that should be considered. That is, monitoring helps by engaging critical thinking and building a tactile awareness of the network — what it does and what is exposed to the activities of a frequently malicious Internet. So, like proper configurations, monitoring requires diligent, attentive systems administrators to maintain security. LinuxForce’s Remote Responder service builds best practices around three essential FOSS tools for good network monitoring: Nagios, Munin, and Logcheck.
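The simplest building block of such monitoring is a reachability probe, the kind of check a system like Nagios runs against each service. A minimal sketch (mine, not Nagios code):

```python
import socket

def check_tcp(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    This is the bare minimum a monitoring system does for each service:
    knowing what should answer, and noticing when it doesn't.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False
```

A real monitoring setup layers scheduling, alerting, and service-specific checks (does the SMTP banner look right? does the web page contain the expected text?) on top of probes like this one.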

Posted by CJ Fearnley in Security, Systems Management, 0 comments

Seven Observations On Software Maintenance And FOSS

The November 2009 issue of Communications of the ACM (CACM) has a very interesting article by Paul Stachour and David Collier-Brown entitled “You Don’t Know Jack About Software Maintenance”. The authors argue energetically for using versioned data structures and “continuous upgrading” to improve the state of the art of software maintenance.

The piece got me thinking about FOSS (Free and Open Source Software) and “continuous upgrading”. Here are seven observations on FOSS software maintenance that occurred to me as I reflected on the CACM article:

  1. FOSS projects “continuously” apply bug fixes and feature enhancements at no additional cost to their users. By applying these improvements “continuously”, the user reaps a steady stream of “interest payments” providing ever-improving security, performance, and functionality.
  2. Since FOSS incurs no licensing or license management costs, upgrading FOSS is not hindered by capital expenses.
  3. Typically support in FOSS projects is focused on the current stable version. Therefore, upgrading to the current stable version is the preferred way to receive the best support from FOSS communities.
  4. One of the key reasons behind Debian‘s strong track record of “continuous upgrading” is its way of handling the tricky issues involved with dependent library upgrades (such as libc6, libssl.so.0.9.8, etc.). The chapter on Shared Libraries in the Debian Policy Manual details a proven method to effectively handle library upgrade issues (including its sophisticated handling of versions).
  5. When upgrading is applied routinely and “continuously”, it becomes crucial to support customizations across upgrades which can be one of the biggest obstacles to a smooth upgrade (see my earlier post on customization and upgradeability). One reason for Debian’s effectiveness in this regard is its robust configuration file handling policy.
  6. It is worth noting that the “continuous” implied here is not the one emphasized in dictionaries (which takes its nuances from the mathematical / physics concept of “no interruptions” and the epsilon-delta definition that students of Calculus learn). That concept of “continuous” is impossible in systems administration which is necessarily discrete as are all computer operations. The connotation required here is, perhaps, “unending”, or “eternal” or somesuch.
  7. The “right” frequency for “continuous” upgrades is a complex tradeoff between business requirements and upgrade infrastructure maturity. Debian and Ubuntu provide very mature support for “continuous upgrading”. They support the upgrade of production servers through release after release after major release with minimal downtime or risk of a glitch that could affect users. Their current release frequency of about 2 years may be the best we can do given the current state of the art of software maintenance. I hope we can learn to increase the frequency as better engineered upgrade policies are developed.

I prefer the name “eternally regenerative software administration” over “continuous upgrading”. It avoids the philosophical problems with the word “continuous” and emphasizes the active, “ecological” approach needed to envision the engineering of “regenerativity” in software. By that I mean software maintenance should involve building the system so each new version enables installation of the next while facilitating management of any customizations and integration with other software (including libraries and other “helper” applications). Regenerativity is the process of growth and change used by Nature itself. Software maintenance needs to follow similar principles.

Posted by CJ Fearnley in Debian, Eternally Regenerative Software Administration, Ubuntu, 0 comments

Crossroads in FOSS Projects: Some Business Considerations

At our Seminar last month, Managing FOSS to Lower Costs and Achieve Business Results, several participants asked about the dynamics of FOSS (Free and Open Source Software) projects that reach a crossroads (a failure, a merger, loss of key personnel, etc.). I had not expected that concern because, it seems to me, the problem is more severe with commercial software. When you have the source code and the right to modify and redistribute it, the source gives you many more options (and its freedoms provide many more protections) than when, for instance, a commercial software vendor goes bankrupt or is bought by a competitor.

But the reason for the questions may be a lack of understanding about how FOSS projects work. They involve individual human beings, perhaps just a single person or, more likely, several people from many organizations and even different cultures around the world, joined in common purpose. For various cultural reasons, the project may be “owned” by an entity — usually a non-profit, but some are for-profit or even government owned, while others may simply be an “ad hoc initiative”. Some projects have explicit constitutions and defined processes for organizing the work and handling problems; others are more informal.

At any time, any human social structure can experience a crossroads that could lead it to fail suddenly or wither on the vine in a gradual descent into “oblivion”. The cause of the failure will shape the results, but a very common situation is that conflicting visions or approaches for the project result in a “fork”. Then a sub-group of the original project takes the source code and starts a “new” project to develop the code in a new direction. Sometimes the original project “dies” and sometimes both continue resulting in two projects. Since multiple FOSS projects serving the same function or market incur inefficiencies due to duplicate development, there is a strong cultural value in the FOSS world to try to find a way to accommodate everyone in the project and prevent forks. When it works, the result is great software that meets everyone’s needs. But the reality is that often it is more effective to have multiple implementations of the same functionality so that each can be optimized for distinct objectives. Frequently one cannot know which approach will be best until many years of development and evaluation have transpired.

I recently learned about a FOSS project that forked when a friend asked me to copy some files to his new “My Book Essential”, a Western Digital product that provides 1TB of USB (Universal Serial Bus) storage. The My Book uses the poorly documented, non-free NTFS (New Technology File System). Linux has three projects that support NTFS: an in-kernel driver, ntfsprogs (the Linux-NTFS project), and NTFS-3G. It turns out that all three were available for my Debian Lenny (5.0.3) system. First, I tried the in-kernel support and learned that it was still read-only. Then I tried ntfsprogs, which failed to mount the My Book:

NTFS-fs error (device sdc1): load_system_files(): Volume is dirty. Mounting read-only. Run chkdsk and mount in Windows.

I realized that since it was a new device it probably did not ship from the factory with a dirty volume. It was probably a bug. So I tried NTFS-3G, which worked very well. In researching the situation I was able to determine that both NTFS-3G and Linux-NTFS are under active development and each has features missing from the other. So each has value and I’m glad my distribution included both. In Debian Lenny, the NTFS-3G driver has better support for writing files.

This illustrates one of the benefits of a crossroads in a FOSS project: you can end up with two good tools to add to your toolbox!

Posted by CJ Fearnley in Debian, FOSS Community, Tech Notes, 0 comments

Forthcoming Design Science Symposium and Systems Administration

Ever since I started doing systems administration, I’ve been interested in applying Buckminster “Bucky” Fuller’s comprehensive anticipatory design science to the task. Bucky extolled the virtues of a comprehensive approach. Put bluntly, the comprehensive perspective says “since what you don’t attend to will get ya, you had better consider everything.” Or said positively: only by considering all elements in a system and all its interrelationships with other (relevant) systems can you ensure reliable on-going operation. In addition, the proactive or anticipatory approach is essential to prevent system complexities from impacting operations. I think of design as human initiative-taking to provide a service or artifact and science as experience-based learning. Evidently, design science is implicit in the work of systems administrators. I think the discipline of comprehensive anticipatory design science can be positively applied to the practice of systems administration.

So I am excited that on November 14 & 15, I will be attending the Synergetics Collaborative’s two-day Symposium on “Design Science” at the Rhode Island School of Design (RISD). Together with the organizing committee, we (I serve as volunteer Executive Director of the Synergetics Collaborative) have put together a program that will develop a deeper understanding of design science. So even though computer systems administration is not on the agenda, I think anyone with a problem-solving focus in their work (including systems administrators) would benefit by attending.

To find out more about this exciting event visit the Design Science: Nature’s Problem Solving Method Symposium home page here.

Posted by CJ Fearnley in News, 0 comments

Slides Available From Our Managing FOSS Seminar

Last Thursday LinuxForce hosted a seminar on Managing Free and Open Source Software (FOSS) for Business Results. The Seminar home page now has links to the slides from the event. Specifically, there are four sets of slides available:

Please let us know if you have any questions about the content in the slides or from the seminar itself.

Posted by CJ Fearnley in News, 0 comments