Welcome to Siva's Blog

Scribbles by Sivananda Hanumanthu
My experiences and learnings on Technology, Leadership, Domains, Life and various other topics, kept here as a reference!
What you can expect here could be something on Java, J2EE, Databases, or an altogether newer programming language; Software Engineering Best Practices, Software Architecture, SOA, REST, Web Services, Microservices, APIs, Technical Architecture, Design, Programming, Cloud, Application Security, Artificial Intelligence, Machine Learning, Big Data and Analytics, Integrations, Middleware, Continuous Delivery, DevOps, Cyber Security, QA/QE, Automations, Emerging Technologies, B2B, B2C, ERP, SCM, PLM, FinTech, IoT, RegTech or any other domain; Tips & Traps, News, Books, Life experiences, Notes, latest trends and many more...
Showing posts with label information security. Show all posts

Saturday, December 4, 2021

DevSecOps - Reference Architecture and Recommendations


What is DevSecOps?

DevSecOps is the philosophy of integrating security practices within the DevOps pipeline, reconciling two seemingly opposed goals: speed of delivery and secure code. Critical security issues are dealt with as they become apparent at almost all stages of the SDLC process, not just after a threat or compromise has occurred. It’s not only about additional tools that automate tasks, but also about the mentality of your developers.

Recommendations for comprehensive DevSecOps

  1. IDE Plugins — IDE extensions that can work like spellcheck and help to avoid basic mistakes at the earliest stage of coding (an IDE is the program where developers write their code, for those who don’t know). The most popular ones are probably DevSkim, JFrog Eclipse, and Snyk.
  2. Pre-Commit Hooks — Tools from this category prevent you from committing sensitive information like credentials into your code management platform. There are some open-source options available, like git-hound, git-secrets, and repo-supervisor.
  3. Secrets Management Tools allow you to control which service has access to what password specifically. Big players like AWS, Microsoft, and Google have their solutions in this space, but you should use cloud-provider-agnostic ones if you have multi-cloud or hybrid-cloud in place.
  4. Static Application Security Testing (SAST) is about checking source code (when the app is not running). There are many free & commercial tools in the space (see here), as the category is over a decade old. Unfortunately, they often produce a lot of false positives and can’t be applied to all programming languages. What’s worse, they can take hours (or even days) to run, so the best practice is to run incremental code tests on weekdays and scan the whole codebase over the weekend.
  5. Source Composition Analysis (SCA) tools are straightforward — they look at libraries that you use in your project and flag the ones with known vulnerabilities. There are dozens of them on the market, and they are sometimes offered as a feature of different products — e.g. GitHub.
  6. Dynamic Application Security Testing (DAST) is the next one in the security chain, and the first one testing running applications (not the source code, as SAST does — you can read about other differences here). It produces fewer false positives than SAST but is similarly time-consuming.
  7. Interactive Application Security Testing (IAST) combines SAST and DAST elements by placing an agent inside the application and performing real-time analysis anywhere in the development process. As a result, the test covers both the source code and all the other external elements like libraries and APIs (this wasn’t possible with SAST or DAST, so the outcomes are more accurate). However, this kind of testing can have an adverse impact on the performance of the app.
  8. Secure infrastructure as code — As containers are gaining popularity, they become an object of interest for malware producers. Therefore you need to scan Docker images that you download from public repositories, and tools like Clair will highlight any potential vulnerabilities.
  9. Compliance as code tools will turn your compliance rules and policy requirements into automated tests. To make it possible your devs need to translate human-readable rules received from non-tech people into code, and compliance-as-a-code tools should do the rest (point out where you are breaking the rules or block updates if they are not in line with your policies).
  10. Runtime application self-protection (RASP) allows applications to run continuous security checks and react to attacks in real time by getting rid of the attacker (e.g. closing their session) and alerting your team about the attack. Similarly to IAST, it can hurt app performance. It’s the fourth testing category shown in the pipeline (after SAST, DAST, and IAST), and you should have at least two of them in your stack.
  11. Web Application Firewall (WAF) lets you define specific network rules for a web application and filter, monitor, and block HTTP traffic to and from a web service when it corresponds to known attack patterns such as SQL injection. All the big cloud providers like Google, AWS, and Microsoft have their own WAFs, but there are also specialised companies like Cloudflare, Imperva, and Wallarm.
  12. Monitoring tools — as mentioned in my DevOps guide, monitoring is a crucial part of the DevOps manifesto. DevSecOps takes it to the next level and covers not only things like downtime, but also security threats.
  13. Chaos engineering. Tools from this category allow you to test your app under different scenarios and patch your holes before problems emerge. “Breaking things on purpose is preferable to being surprised when things break,” as Mathias Lafeldt of Gremlin put it.
  14. Vulnerability management — these tools help you identify the holes in your security systems. They classify weaknesses by the potential impact of malicious attacks exploiting them, so that you can focus on fixing the most dangerous ones first. Some of the tools come with add-ons that automatically fix the bugs they find. This category is full of open-source solutions, and here you can find the top 20.
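The pre-commit idea from point 2 can be sketched in a few lines: a hook scans staged files for secret-like strings and blocks the commit if anything matches. The patterns below are toy examples for illustration only; real tools such as git-secrets ship far more complete, battle-tested rule sets.

```python
import re
import sys

# Illustrative toy patterns; real hooks use much larger curated sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),   # PEM private key header
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),   # hard-coded password
]

def scan_text(text: str) -> list[str]:
    """Return every secret-like string found in the given text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def main(paths: list[str]) -> int:
    """Exit non-zero (blocking the commit) if any staged file has a hit."""
    failed = False
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for hit in scan_text(fh.read()):
                print(f"{path}: possible secret: {hit[:12]}...")
                failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Wired up as a Git pre-commit hook, the script would receive the staged file names and refuse the commit on any match.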

Reference Architectures of DevSecOps?

Refer: https://cdn2.hubspot.net/hubfs/4132678/Resources/DevSecOps%20Reference%20Architectures%20-%20March%202018.pdf

References:
https://medium.com/inside-inovo/devsecops-explained-venture-capital-perspective-cb5593c85b4e

    Friday, April 30, 2021

    9 Pillars of DevOps Best Practices



    Leadership Practices for DevOps

    • Leaders demonstrate a long-term vision for organizational direction and team direction.
    • Leaders intellectually stimulate the team by encouraging them to ask new questions and question basic assumptions about the work.
    • Leaders provide inspirational communication that inspires pride in being part of the team, says positive things about the team, builds passion and motivation, and encourages people to see that change brings opportunities.
    • Leaders demonstrate support by considering others’ personal feelings before acting, being thoughtful of others’ personal needs and caring about individuals’ interests.
    • Leaders promote personal recognition by commending teams for better-than-average work, acknowledging improvements in the quality of work and personally complimenting individuals’ outstanding work.

    Collaborative Culture Practices for DevOps

    • The culture encourages cross-functional collaboration and shared responsibilities and avoids silos between Dev, Ops and QA.
    • The culture encourages learning from failures and cooperation between departments.
    • Communication flows fluidly across the end-to-end cross-functional team using collaboration tools where appropriate (for example Slack, HipChat, Yammer).
    • The DevOps system is created by an expert team, and reviewed by a coalition of stakeholders including Dev, Ops and QA.
    • Changes to end-to-end DevOps workflows are led by an expert team, and reviewed by a coalition of stakeholders including Dev, Ops and QA.
    • DevOps system changes follow a phased process to ensure the changes do not disturb the current DevOps operation. Examples of implementation phases include: proof of concept (POC) phase in a test environment, limited production and deployment to all live environments.
    • Key performance indicators (KPIs) are set and monitored by the entire team to validate the performance of the end-to-end DevOps pipeline, always. KPIs include the time for a new change to be deployed, the frequency of deliveries and the number of times changes fail to pass the tests for any stage in the DevOps pipeline.

    Design-for-DevOps Practices for DevOps

    • Products are architected to support modular independent packaging, testing and releases. In other words, the product itself is partitioned into modules with minimal dependencies between modules. In this way, the modules can be built, tested and released without requiring the entire product to be built, tested and released all at once.
    • Applications are architected as modular, immutable microservices ready for deployment in cloud infrastructures in accordance with the tenets of 12-factor apps, rather than monolithic, mutable architectures.
    • Software source code changes are pre-checked with static analysis tools, prior to commit to the integration branch. Static analysis tools are used to ensure the modified source code does not introduce critical software faults such as memory leaks, uninitialized variables, and array-boundary problems.
    • Software code changes are pre-checked using peer code reviews prior to commit to the integration/trunk branch.
    • Software code changes are pre-checked with dynamic analysis tests prior to committing to the integration/trunk branch to ensure the software performance has not degraded.
    • Software changes are integrated in a private environment, together with the most recent integration branch version, and tested using functional testing prior to committing the software changes to the integration/trunk branch.
    • Software features are tagged with software switches (i.e., feature tags or toggles) during check-in to enable selective feature-level testing, promotion and reverts.
    • Automated test cases are checked in to the integration branch at the same time code changes are checked in, together with evidence that the tests passed in a pre-flight test environment.
    • Developers commit their code changes regularly, at least once per day.
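The feature-switch practice above can be sketched minimally. The flag names and in-memory store here are illustrative assumptions, not from any particular product; real systems typically back flags with a config service so they can be flipped without a redeploy.

```python
# Minimal feature-toggle sketch: flags let unfinished code ship "dark" on the
# integration branch and be enabled selectively for testing or rollout.

FLAGS = {
    "new-checkout-flow": False,   # still under test, off by default
    "fast-search-index": True,    # fully rolled out
}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so a revert never breaks callers."""
    return FLAGS.get(flag, False)

def checkout(cart: list) -> str:
    # The new path stays dormant until its toggle is switched on.
    if is_enabled("new-checkout-flow"):
        return f"new flow: {len(cart)} items"
    return f"legacy flow: {len(cart)} items"

print(checkout(["book", "pen"]))  # legacy flow until the flag is enabled
```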

    Continuous Integration Practices for DevOps

    • A software version management (SVM) system is used to manage all source code changes. (Git, Perforce, Mercurial, etc.)
    • A software version management (SVM) system is used to manage all versions of code images changes used by the build process. (Git, Perforce, Mercurial, etc.)
    • A software version management (SVM) system is used to manage all versions of tools and infrastructure configurations and tests that are used in the build process. (Git, Perforce, Mercurial, etc.)
    • All production software changes are maintained in a single trunk or integration branch of the code.
    • The software version(s) supporting each customer release are maintained in a separate release branch, to support software updates for each release.
    • Every software commit automatically triggers a build process for all components of the module that has code changed by the commit. The system is engineered such that resources are always sufficient to execute a build.
    • Once triggered, the software build process is fully automated and produces build artifacts, provided the build time checks are successful.
    • The automated build process checks include unit tests.
    • Resources for builds are available on-demand and never block a build.
    • CI builds are fast enough to complete incremental builds in less than an hour.
    • The build process and resources for builds scale up and down automatically according to the complexity of the change. If a full build is required, the CI system automatically scales horizontally to ensure the builds are completed as quickly as possible.
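The per-commit, change-scoped build triggering described above can be sketched as a simple mapping from changed files to the modules that own them; the module layout and paths are hypothetical.

```python
# Sketch: given the files touched by a commit, rebuild only the owning
# modules rather than the whole product.

def modules_to_build(changed_files: list, module_roots: dict) -> set:
    """Return the set of modules owning at least one changed file."""
    selected = set()
    for path in changed_files:
        for module, root in module_roots.items():
            if path.startswith(root + "/"):
                selected.add(module)
    return selected

# Hypothetical repository layout.
module_roots = {"auth": "services/auth", "billing": "services/billing"}
changed = ["services/auth/login.py", "docs/README.md"]
print(modules_to_build(changed, module_roots))  # {'auth'}
```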

    Continuous Testing Practices for DevOps

    • Development changes are pre-flight tested in a clone of the production environment prior to being integrated to the trunk branch. (Note: “production environment” means “variations of customer configurations of a product.”)
    • New unit and functional regression tests that are necessary to test a software change are created together with the code and integrated into the trunk branch at the same time the code is. The new tests are then used to test the code after integration.
    • A test script standard is used to guide test script creation, to ensure the scripts are performing the intended test purpose and are maintainable.
    • Tests are selected automatically according to the specific software changes. CT is orchestrated dynamically, whereby the execution of portions of the CT test suites may be accelerated, or skipped entirely, depending on how complex or risky the software changes are.
    • Test resources are scaled automatically according to the resource requirements of specific tests selected and the available time for testing.
    • Release regression tests are automated. At least 85% of the tests are fully automated and the remaining are auto-assisted if portions must be performed manually.
    • Release performance tests are automated to verify that no unacceptable degradations are released.
    • Blue/green testing methods are used to verify deployments in a staging environment before activating the environment to live. A/B testing methods are used together with feature toggles to try different versions of code with customers in separate live environments. Canary testing methods are used to try new code versions on selected live environments.
    • The entire testing life cycle, which may include pre-flight, integration, regression, performance and release acceptance tests are automatically orchestrated across the DevOps pipeline. The test suites for each phase include a predefined set of tests that may be selected automatically according to predefined criteria.
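The dynamic test orchestration described above could look roughly like the following; the risk score, thresholds, and tier names are illustrative assumptions, not a standard formula.

```python
# Sketch: pick which continuous-testing tiers to run based on a crude risk
# score for the change, accelerating trivial changes and fully testing
# risky ones.

TIERS = ["unit", "functional", "regression", "performance"]

def risk_score(files_changed: int, touches_core: bool) -> int:
    # Core-code changes are weighted heavily; the weights are arbitrary.
    return files_changed + (10 if touches_core else 0)

def select_tiers(score: int) -> list:
    if score >= 10:
        return TIERS          # risky change: run everything
    if score >= 3:
        return TIERS[:3]      # moderate: skip performance tests
    return TIERS[:1]          # trivial: unit tests only

print(select_tiers(risk_score(1, False)))  # ['unit']
print(select_tiers(risk_score(2, True)))   # all four tiers
```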

    Elastic Infrastructure Practices for DevOps

    • The data and executable files needed for building and testing builds are automatically archived frequently and can be reinstated on demand. Archives include all release and integration repositories. If an older version of a build needs to be updated, then the environment for building and testing that version can be retrieved and reinstated on demand and can be accomplished in a short time (for example, minutes to hours.)
    • Build and test processes are flexible enough to automatically handle a wide variety of exceptions gracefully. If the build or test process for a component is unable to complete, then the process for that failed component is reported and automatically scheduled for analysis, but build and test processes for other components continue. The reasons for the component failure are automatically analyzed and rescheduled if the reason for the failure can be corrected by the system; if not, then it is reported and suspended.
    • System configuration management and system inventory is stored and maintained in a configuration management database (CMDB).
    • Infrastructure changes are managed and automated using configuration management tools that assure idempotency.
    • Automated tools are used to support immutable infrastructure deployments.
    • Equal performance for all. The user performance experience of the build and test processes is consistent for all teams and users, independent of location or other factors. SLAs and monitoring tools ensure the user performance experience stays consistent for all users.
    • Fault recovery mechanisms are provided. Build and test system fault monitoring, fault detection, system and data monitoring and recovery mechanisms exist. They are automated and are consistently verified through simulated failure conditions.
    • Infrastructure failure modes are frequently tested.
    • Disaster recovery procedures are automated.
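Idempotency, as required of the configuration management tools above, simply means that re-applying a change is a no-op once the system has converged. A toy sketch, with a dict standing in for real host state and an illustrative sshd-style setting:

```python
# Sketch of an idempotent configuration step: applying it twice leaves the
# system in exactly the same state as applying it once.

def ensure_setting(config: dict, key: str, value: str) -> bool:
    """Set key to value; return True only if a change was actually made."""
    if config.get(key) == value:
        return False          # already converged, nothing to do
    config[key] = value
    return True

state = {}
assert ensure_setting(state, "PermitRootLogin", "no") is True    # first run changes state
assert ensure_setting(state, "PermitRootLogin", "no") is False   # second run is a no-op
```

Real configuration management tools apply the same converge-then-skip logic to files, packages, and services.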

    Continuous Monitoring Practices for DevOps

    • Logging and proactive alert systems make it easy to detect and correct DevOps system failures. Logs and proactive system alerts are in place for most DevOps component failures, and are organized in a manner to quickly identify the highest-priority problems.
    • Snapshot and trend results of each metric from each DevOps pipeline stage (for example, builds, artifacts, tests) are automatically calculated in process and visible to everyone in the Dev, QA and Ops Teams.
    • Key performance indicators (KPIs) for the DevOps infrastructure components are automatically gathered, calculated and made visible to anyone on the team that subscribes to them. Example metrics are availability (uptime) of computing resources for CI, CT and CD processes, time to complete builds, time to complete tests, number of commits that fail and number of changes that need to be reverted due to serious failures.
    • Metrics and thresholds for DevOps infrastructure components are likewise automatically gathered, calculated and made visible to anyone on the team that subscribes to them.
    • Process analytics are used to monitor and improve the integration, test and release process. Descriptive build and test analytics drive process improvements.
    • Predictive analytics are used to dynamically adjust DevOps pipeline configurations. For analysis of test results, data may indicate a need to concentrate more testing in areas that have a higher failure trend.
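Computing the example KPIs above from raw deployment records might look like this sketch; the record fields and sample data are invented for illustration.

```python
from datetime import datetime

# Hypothetical deployment records, one per pipeline run.
deployments = [
    {"finished": datetime(2021, 4, 1), "failed": False, "build_minutes": 22},
    {"finished": datetime(2021, 4, 2), "failed": True,  "build_minutes": 25},
    {"finished": datetime(2021, 4, 4), "failed": False, "build_minutes": 19},
]

def kpis(records: list) -> dict:
    """Roll raw records up into the pipeline KPIs named above."""
    total = len(records)
    failures = sum(1 for r in records if r["failed"])
    return {
        "deploys": total,
        "change_failure_rate": failures / total,
        "avg_build_minutes": sum(r["build_minutes"] for r in records) / total,
    }

print(kpis(deployments))
```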

    Continuous Security Practices for DevOps

    • Developers are empowered and trained to take personal responsibility for security.
    • Security assurance automation and security monitoring practices are embraced by the organization.
    • All information security platforms that are in use expose full functionality via APIs for automation capability.
    • Proven version control practices and tools are used for all application software, scripts, templates and blueprints that are used in DevOps environments.
    • Immutable infrastructure mindsets are adopted to ensure production systems are locked down.
    • Security controls are automated so as not to impede DevOps agility.
    • Security tools are integrated into the CI/CD pipeline.
    • Source code for key intellectual property on build or test machines is only accessible by trusted users with verified credentials. Build and test scripts do not contain credentials for access to any system that holds intellectual property. Intellectual property is divided such that not all of it exists in the same archive, and each archive has different credentials.
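One small concrete instance of the credential practice above: a build script can read its token from the environment at run time, so nothing secret ever lands in version control. The variable name here is an illustrative assumption.

```python
import os

def get_artifact_token() -> str:
    """Fetch the artifact-repository token injected by the secrets manager.

    Failing fast when the variable is absent keeps the script usable in CI
    while guaranteeing no fallback credential is ever hard-coded.
    """
    token = os.environ.get("ARTIFACT_REPO_TOKEN")
    if not token:
        raise RuntimeError(
            "ARTIFACT_REPO_TOKEN not set; inject it from your secrets manager"
        )
    return token
```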

    Continuous Delivery Practices for DevOps

    • Delivery and deployment stages are separate. The delivery stage precedes the deployment pipeline.
    • All deliverables that pass the delivery metrics are packaged and prepared for deployment using containers.
    • Deliverable packages include sufficient configuration and test data to validate each deployment. Configuration management tools are used to manage configuration information.
    • Deliverables from the delivery pipeline are automatically pushed to the deployment pipeline, once acceptable delivery measures are achieved.
    • Deployment decisions are determined according to predetermined metrics. The entire deployment process may take hours, but usually less than a day.
    • Deployments to production environments are staged such that failed deployments can be detected early and impact to customers isolated quickly.
    • Deployments are arranged with automated recovery and self-healing capabilities in case a deployment fails.
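The staged-deployment and automated-recovery practices above can be sketched as a simple rollout gate; the ring names and error threshold are illustrative assumptions.

```python
# Sketch: promote a release to the next deployment ring only while its
# observed error rate stays under a threshold; otherwise roll back, so a
# failed deployment is caught early and customer impact stays isolated.

RINGS = ["canary", "region-1", "region-2", "global"]

def next_action(current_ring: str, error_rate: float, threshold: float = 0.01):
    if error_rate > threshold:
        return ("rollback", current_ring)   # self-healing path
    i = RINGS.index(current_ring)
    if i + 1 < len(RINGS):
        return ("promote", RINGS[i + 1])
    return ("done", current_ring)           # fully rolled out

print(next_action("canary", 0.002))   # ('promote', 'region-1')
print(next_action("region-1", 0.05))  # ('rollback', 'region-1')
```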

    What This Means

    DevOps is a powerful approach that offers many benefits to the organizations that adopt it. Achieving performance efficiently with DevOps depends on following best practices. By following the nine pillars of practices enumerated in this blog, organizations can achieve the performance potential that DevOps has to offer.


    Refer: https://devops.com/nine-pillars-of-devops-best-practices/

    Sunday, January 17, 2021

    Continuous Security Best Practices

    9 Pillars of Continuous Security Best Practices

    1. Leadership 
    2. Collaborative Culture
    3. Design for DevOps
    4. Continuous Integration
    5. Continuous Testing
    6. Continuous Monitoring
    7. Continuous Security
    8. Elastic Infrastructure
    9. Continuous Delivery/Deployment
    Refer: https://devops.com/9-pillars-of-continuous-security-best-practices/

    Thursday, December 31, 2020

    Open Source Miracles: Semgrep

    Semgrep is a lightweight static analysis tool for many languages. It finds bug variants with patterns that look like source code. This open-source tool can be used for SAST (Static Application Security Testing) by developers and security engineers.

    Semgrep is a fast, open-source, static analysis tool that excels at expressing code standards — without complicated queries — and surfacing bugs early at editor, commit, and CI time. Precise rules look like the code you’re searching; no more traversing abstract syntax trees or wrestling with regexes.

    Refer: https://github.com/returntocorp/semgrep

    Monday, October 1, 2018

    Do you want to access your computer and terminal via the Web?


    I mean via your web browser, so you can do the right troubleshooting and do everything over HTTP.

    Apache Guacamole is a clientless remote desktop gateway. It supports standard protocols like VNC, RDP, and SSH.

    https://guacamole.apache.org/

    GoTTY - Share your terminal as a web application. GoTTY is a simple command-line tool that turns your CLI tools into web applications.

    https://github.com/yudai/gotty

    Sunday, May 13, 2018

    A few of my experiences as a Security Architect


    I got the opportunity to work in a Security Architect role, apart from my usual Software Architect or leadership roles, across multiple organizations. I wanted to note a few of my experiences here, as a reference for myself and for other practitioners...


    1. Embed Security as a practice in all phases of SDLC or Agile projects
      1. Security requirements tracking
      2. Security threat modeling to achieve secure design & architecture; based on the ranking we assign to each risk, carry out risk mitigations and audit them in a later phase
      3. Make sure to have Secure Infra Design, Secure Product implementation, Secure Deployment or DevSecOps, and finally have a Secure Operations team who can monitor and report
      4. Automate the processes or steps whatever you can from the above-mentioned steps like
        1. An automated way of doing SAST (Static Application Security Testing)
        2. An automated way of doing DAST (Dynamic Application Security Testing)
        3. Apart from the above DAST, see if you can automate any other aspects of your product-specific Security testing
        4. An automated way of finding Open Source Software and Third Party libraries Vulnerability Assessment
        5. An automated way of doing Infrastructure Technology Hardening
        6. An automated way of doing Secure Configurations
        7. An automated way of doing Secure Deployments and leveraging DevSecOps processes
        8. An automated way of doing Monitoring and providing meaningful Security Analytics
        9. An automated way of doing Alerts and Notifications
        10. An automated way of raising Security bugs for tracking, and of closing out bugs raised earlier
      5. Finally, feed the outputs above (and anything they don't cover) as documents or information into your Risk Management and Governance & Compliance processes; these details are also really needed for auditing purposes
    2. Apart from the above, have a dedicated Red Team within your organization that can manage the security toolchain and also exercise 'defense in depth' across overall Information Security and every layer of your product, in a wing-to-wing manner
    3. Many times, it's a good idea to have external Penetration Testing performed by expert cybersecurity professionals before the release of the product to end customers
    4. Some of the best practices which I follow thoroughly to have a product very secure enough are
      1. Always, change the default passwords or keys of your devices or products so that those configurations are only known to you
      2. Have Role-Based Access Control (RBAC) defined for your product, and always assign the least privilege or role possible for a given user of your product
      3. Don't assume that machine-to-machine or user-to-machine connections to your product or your APIs are trusted; implement OAuth or two-factor authentication where necessary, and use proper certificates to secure communications
      4. Don't reinvent the wheel just to secure your products; take a simple, reliable, and resilient approach and use existing tools to achieve security in your products. Check first whether a proven toolchain or component already exists that can be leveraged
      5. Use generic centralized logging across your product components (like web apps, APIs, other data and services) to capture the right sort of event messages, which can be leveraged for security monitoring and in turn provide meaningful insights
    5. Moreover, implement an awareness program with the right set of documents, details, workshops and training sessions, so that the various groups in the organization are aware of Information Security and understand and practice it in their day-to-day product development activities
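The least-privilege RBAC practice from point 4.2 above can be sketched with an explicit role-to-permission map that denies anything not granted. The roles and permissions here are illustrative assumptions, not from any specific product.

```python
# Deny-by-default RBAC sketch: access exists only if a role explicitly
# grants the permission; unknown roles or permissions get nothing.

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only for explicitly granted (role, permission) pairs."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("viewer", "read")
assert not is_allowed("viewer", "write")          # least privilege holds
assert not is_allowed("unknown-role", "read")     # deny by default
```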


    References:
    1. https://www.owasp.org/index.php/Category:Principle
    2. https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-27ra.pdf

    Happy reading and learning! :)

    Sunday, November 6, 2016

    IoT : The Internet of Things

     The Internet of Things is an emerging topic of technical, social, and economic significance. Consumer products, durable goods, cars and trucks, industrial and utility components, sensors, and other everyday objects are being combined with Internet connectivity and powerful data analytic capabilities that promise to transform the way we work, live, and play. Projections for the impact of IoT on the Internet and economy are impressive, with some anticipating as many as 100 billion connected IoT devices and a global economic impact of more than $11 trillion by 2025.

    At the same time, however, the Internet of Things raises significant challenges that could stand in the way of realizing its potential benefits. Attention-grabbing headlines about the hacking of Internet-connected devices, surveillance concerns, and privacy fears already have captured public attention. Technical challenges remain and new policy, legal and development challenges are emerging.

    This overview document is designed to help the Internet Society community navigate the dialogue surrounding the Internet of Things in light of the competing predictions about its promises and perils. The Internet of Things engages a broad set of ideas that are complex and intertwined from different perspectives. Key concepts that serve as a foundation for exploring the opportunities and challenges of IoT include:

    • IoT Definitions: The term Internet of Things generally refers to scenarios where network connectivity and computing capability extends to objects, sensors and everyday items not normally considered computers, allowing these devices to generate, exchange and consume data with minimal human intervention. There is, however, no single, universal definition.
    • Enabling Technologies: The concept of combining computers, sensors, and networks to monitor and control devices has existed for decades. The recent confluence of several technology market trends, however, is bringing the Internet of Things closer to widespread reality. These include Ubiquitous Connectivity, Widespread Adoption of IP-based Networking, Computing Economics, Miniaturization, Advances in Data Analytics, and the Rise of Cloud Computing.
    • Connectivity Models: IoT implementations use different technical communications models, each with its own characteristics. Four common communications models described by the Internet Architecture Board include: Device-to-Device, Device-to-Cloud, Device-to-Gateway, and Back-End Data-Sharing. These models highlight the flexibility in the ways that IoT devices can connect and provide value to the user.
    • Transformational Potential: If the projections and trends towards IoT become reality, it may force a shift in thinking about the implications and issues in a world where the most common interaction with the Internet comes from passive engagement with connected objects rather than active engagement with content. The potential realization of this outcome – a “hyperconnected world” — is testament to the general-purpose nature of the Internet architecture itself, which does not place inherent limitations on the applications or services that can make use of the technology.

    Five key IoT issue areas are examined to explore some of the most pressing challenges and questions related to the technology. These include security; privacy; interoperability and standards; legal, regulatory, and rights; and emerging economies and development.

    Security

    While security considerations are not new in the context of information technology, the attributes of many IoT implementations present new and unique security challenges. Addressing these challenges and ensuring security in IoT products and services must be a fundamental priority. Users need to trust that IoT devices and related data services are secure from vulnerabilities, especially as this technology becomes more pervasive and integrated into our daily lives. Poorly secured IoT devices and services can serve as potential entry points for cyber attack and expose user data to theft by leaving data streams inadequately protected.

    The interconnected nature of IoT devices means that every poorly secured device that is connected online potentially affects the security and resilience of the Internet globally. This challenge is amplified by other considerations like the mass-scale deployment of homogenous IoT devices, the ability of some devices to automatically connect to other devices, and the likelihood of fielding these devices in unsecure environments.

    As a matter of principle, developers and users of IoT devices and systems have a collective obligation to ensure they do not expose users and the Internet itself to potential harm. Accordingly, a collaborative approach to security will be needed to develop effective and appropriate solutions to IoT security challenges that are well suited to the scale and complexity of the issues.

    Privacy

    The full potential of the Internet of Things depends on strategies that respect individual privacy choices across a broad spectrum of expectations. The data streams and user specificity afforded by IoT devices can unlock incredible and unique value to IoT users, but concerns about privacy and potential harms might hold back full adoption of the Internet of Things. This means that privacy rights and respect for user privacy expectations are integral to ensuring user trust and confidence in the Internet, connected devices, and related services.

    Indeed, the Internet of Things is redefining the debate about privacy issues, as many implementations can dramatically change the ways personal data is collected, analyzed, used, and protected. For example, IoT amplifies concerns about the potential for increased surveillance and tracking, difficulty in being able to opt out of certain data collection, and the strength of aggregating IoT data streams to paint detailed digital portraits of users. While these are important challenges, they are not insurmountable. In order to realize the opportunities, strategies will need to be developed to respect individual privacy choices across a broad spectrum of expectations, while still fostering innovation in new technology and services.

    Interoperability / Standards

    A fragmented environment of proprietary IoT technical implementations will inhibit value for users and industry. While full interoperability across products and services is not always feasible or necessary, purchasers may be hesitant to buy IoT products and services if there is integration inflexibility, high ownership complexity, and concern over vendor lock-in.

    In addition, poorly designed and configured IoT devices may have negative consequences for the networking resources they connect to and the broader Internet. Appropriate standards, reference models, and best practices also will help curb the proliferation of devices that could behave in ways disruptive to the Internet. The use of generic, open, and widely available standards as technical building blocks for IoT devices and services (such as the Internet Protocol) will support greater user benefits, innovation, and economic opportunity.
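To illustrate the value of generic, open formats over proprietary ones, the sketch below encodes a sensor reading as plain JSON, which any standards-based consumer can parse without a vendor-specific SDK. The field names are illustrative, not drawn from any particular IoT standard.

```python
import json

def encode_reading(device_id: str, metric: str, value: float, unit: str) -> bytes:
    """Serialize a sensor reading as a JSON document in UTF-8, a widely
    implemented open format, so the payload stays portable across vendors."""
    payload = {"device_id": device_id, "metric": metric, "value": value, "unit": unit}
    return json.dumps(payload).encode("utf-8")

def decode_reading(raw: bytes) -> dict:
    """Parse a reading produced by encode_reading (or any JSON producer)."""
    return json.loads(raw.decode("utf-8"))
```

Because both sides depend only on JSON, replacing either the device firmware or the back-end service does not lock the user into one vendor's tooling.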

    Legal, Regulatory and Rights

    The use of IoT devices raises many new regulatory and legal questions as well as amplifies existing legal issues around the Internet. The questions are wide in scope, and the rapid rate of change in IoT technology frequently outpaces the ability of the associated policy, legal, and regulatory structures to adapt.

    One set of issues surrounds cross-border data flows, which occur when IoT devices collect data about people in one jurisdiction and transmit it to another jurisdiction with different data protection laws for processing. Further, data collected by IoT devices is sometimes susceptible to misuse, potentially causing discriminatory outcomes for some users. Other legal issues with IoT devices include the conflict between law enforcement surveillance and civil rights; data retention and destruction policies; and legal liability for unintended uses, security breaches, or privacy lapses.

    While the legal and regulatory challenges are broad and complex in scope, adopting the guiding Internet Society principles of promoting a user’s ability to connect, speak, innovate, share, choose, and trust are core considerations for evolving IoT laws and regulations that enable user rights.

    Emerging Economy and Development Issues

    The Internet of Things holds significant promise for delivering social and economic benefits to emerging and developing economies. This includes areas such as sustainable agriculture, water quality and use, healthcare, industrialization, and environmental management, among others. As such, IoT holds promise as a tool in achieving the United Nations Sustainable Development Goals.

    The broad scope of IoT challenges will not be unique to industrialized countries. Developing regions also will need to respond to realize the potential benefits of IoT. In addition, the unique needs and challenges of implementation in less-developed regions will need to be addressed, including infrastructure readiness, market and investment incentives, technical skill requirements, and policy resources.

    The Internet of Things is happening now. It promises to offer a revolutionary, fully connected “smart” world as the relationships between objects, their environment, and people become more tightly intertwined. Yet the issues and challenges associated with IoT need to be considered and addressed in order for the potential benefits for individuals, society, and the economy to be realized.

    Ultimately, solutions for maximizing the benefits of the Internet of Things while minimizing the risks will not be found by engaging in a polarized debate that pits the promises of IoT against its possible perils. Rather, it will take informed engagement, dialogue, and collaboration across a range of stakeholders to plot the most effective ways forward.

    For more details, refer to: https://www.internetsociety.org/wp-content/uploads/2017/08/ISOC-IoT-Overview-20151221-en.pdf