• All 32
  • Public 31
  • Private 1
  • Languages
  • C 11
  • C# 1
  • C++ 4
  • Go 1
  • HTML 3
  • Java 3
  • JavaScript 1
  • PHP 1
  • Python 2
  • Ruby 1
  • Shell 4
  • 32 project results



    Build environment and train a robot arm from scratch (Reinforcement Learning) https://morvanzhou.github.io/tutorial… Source: https://github.com/MorvanZhou/train-robot-arm-from-scratch.git.




    Vagrantfile to create a Linux virtual machine with a full GCC ARM toolchain for compiling ARM code and flashing it with tools like OpenOCD & STLink. Using this virtual machine you can get set up to compile and load ARM code from any platform that Vagrant supports (Windows, Mac OS X, Linux). Source: https://github.com/adafruit/ARM-toolchain-vagrant.git.
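The typical workflow with a Vagrantfile like this can be sketched as follows, using standard Vagrant commands. The cross-compiler invocation at the end is an illustrative assumption (the usual `arm-none-eabi-gcc` driver from the GCC ARM toolchain); the exact box, provisioning, and shared-folder layout depend on the repository's Vagrantfile.

```shell
# Clone the repository that contains the Vagrantfile
git clone https://github.com/adafruit/ARM-toolchain-vagrant.git
cd ARM-toolchain-vagrant

# Bring up the Linux VM; this downloads the base box and
# provisions the GCC ARM toolchain inside the guest
vagrant up

# Open a shell inside the VM; by default Vagrant shares the
# project directory into the guest at /vagrant
vagrant ssh

# Inside the VM, the cross-compiler can then be invoked, e.g.:
#   arm-none-eabi-gcc -mcpu=cortex-m0 -mthumb -o blink.elf blink.c

# When done, suspend or destroy the VM from the host
vagrant halt
```

`vagrant up`/`ssh`/`halt` are the standard lifecycle commands and work the same from Windows, Mac OS X, or Linux hosts, which is the portability point the description makes.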




    Apache ServiceComb Saga is an eventual-data-consistency solution for microservice applications. Features:
    • High availability: supports cluster mode.
    • High reliability: all transaction events are persisted in the database.
    • High performance: transaction events are reported via gRPC, and transaction request payloads are serialized and deserialized with Kryo.
    • Low intrusiveness: only 2-3 annotations plus the corresponding compensation methods are needed for a distributed transaction.
    • Simple deployment: can be deployed quickly via Docker.
    • Supports forward recovery (retry) and backward recovery (compensation).
    • Easy to extend: the Pack architecture makes it easy to implement additional coordination mechanisms.




    README This README would normally document whatever steps are necessary to get the application up and running. Things you may want to cover:
    • Ruby version
    • System dependencies
    • Configuration
    • Database creation
    • Database initialization
    • How to run the test suite
    • Services (job queues, cache servers, search engines, etc.)
    • Deployment instructions
    …
    Please feel free to use a different markup language if you do not plan to run rake doc:app.




    vaeThink is a fast, lightweight PHP content management system built on two major Chinese open-source frameworks, ThinkPHP5 and Layui 2. It was created to help developers improve development efficiency and reduce project development costs. http://vaethink.com




    Features:
    • Levenberg-Marquardt, LBFGS, Riemannian Trust Region, and Nesterov's accelerated gradient descent algorithms
    • GPU acceleration using CUDA
    • Fast and accurate interferometric calibration
    • Gaussian and Student's t noise models
    • Shapelet source models
    • CASA MS data format supported
    • Distributed calibration using MPI: consensus optimization with data multiplexing
    • Tools to build sky models and restore sky models to images
    • Adaptive update of the ADMM penalty (Barzilai-Borwein, a.k.a. the spectral method)
    Read INSTALL for installation. This file gives a brief guide to using SAGECal. Warning: this file may be obsolete; use sagecal -h to see up-to-date options.




    Casacore: a suite of C++ libraries for radio astronomy data processing.

    Installation

    Obtaining the source: the casacore source code is maintained on GitHub. You can obtain it with:
    $ git clone https://github.com/casacore/casacore

    Requirements: to compile casacore you need cmake, gfortran, g++, flex, bison, blas, lapack, cfitsio (3.181 or later), wcslib (4.20 or later), sofa (optional, only for testing casacore measures), fftw3 (optional), hdf5 (optional), numpy (optional), boost-python (optional), and ncurses (optional).

    On Debian / Ubuntu you can install these with:
    $ sudo apt-get install build-essential cmake gfortran g++ libncurses5-dev \
        libreadline-dev flex bison libblas-dev liblapacke-dev libcfitsio3-dev \
        wcslib-dev
    and the optional libraries:
    $ sudo apt-get install libhdf5-serial-dev libfftw3-dev python-numpy \
        libboost-python-dev libpython3.4-dev libpython2.7-dev

    On CentOS 7 you can install these with:
    $ sudo yum install cmake cmake-gui gcc-gfortran gcc-c++ flex bison \
        blas blas-devel lapack lapack-devel cfitsio cfitsio-devel \
        wcslib wcslib-devel ncurses ncurses-devel readline readline-devel \
        python-devel boost boost-devel fftw fftw-devel hdf5 hdf5-devel \
        numpy boost-python

    Obtaining measures data: various parts of casacore require measures data, which needs regular updating. You can obtain the WSRT measures archive from the ASTRON FTP server: ftp://ftp.astron.nl/outgoing/Measures/ Extract it to a permanent location on your filesystem.

    Compilation: in the casacore source folder run:
    mkdir build
    cd build
    cmake ..
    make
    make install

    Various flags are available to cmake to enable and disable options, for example:
    $ cmake -DUSE_FFTW3=ON -DDATA_DIR=/usr/share/casacore/data -DUSE_OPENMP=ON \
        -DUSE_HDF5=ON -DBUILD_PYTHON=ON -DUSE_THREADS=ON
    DATA_DIR should point to the location where you extracted the measures data. The special variables %CASAROOT% and %CASAHOME% can be used here; they can be set at run time through the .casarc file.

    We now have experimental support for Python 3. You can build Python 3 support using -DBUILD_PYTHON3=on. Note that CMake may have problems detecting the correct Python 3 libraries and headers, so you may need to set them manually, for example:
    -DPYTHON3_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.4m.so -DPYTHON3_INCLUDE_DIR=/usr/include/python3.4
    To configure Python 2 specific settings use PYTHON2_EXECUTABLE, PYTHON2_LIBRARY, and PYTHON2_INCLUDE_DIR. To configure Python 3 specific settings use PYTHON3_EXECUTABLE, PYTHON3_LIBRARY, and PYTHON3_INCLUDE_DIR.

    If you run into problems with Boost libraries, try setting -DBoost_NO_BOOST_CMAKE=True. This will be necessary if you have the libraries from NRAO CASA in your PATH or LD_LIBRARY_PATH.

    Ubuntu packages: casacore is part of the KERN suite, which supplies precompiled binaries for Ubuntu 14.04 and 16.04.

    Documentation: http://casacore.github.io/casacore

    Problems & bugs: if you have any issues compiling or using casacore, please open an issue on the issue tracker on GitHub. If you have patches, please open a pull request. Your contributions are more than welcome! But to maintain high code quality we have written a contribution manual; please read that first.
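The clone, configure, and compile steps above can be condensed into a single script. This is a sketch for a Debian/Ubuntu host using only flags named in the README; the measures-data path /opt/measures is a placeholder assumption, not taken from the source, and should point wherever you extracted the WSRT archive.

```shell
#!/bin/sh
set -e

# Build prerequisites (Debian/Ubuntu, as listed in the README)
sudo apt-get install -y build-essential cmake gfortran g++ libncurses5-dev \
    libreadline-dev flex bison libblas-dev liblapacke-dev libcfitsio3-dev \
    wcslib-dev

# Fetch the source
git clone https://github.com/casacore/casacore
cd casacore

# Out-of-source build; DATA_DIR must point at the extracted measures data
# (/opt/measures is a placeholder here)
mkdir build && cd build
cmake .. -DUSE_FFTW3=ON -DUSE_OPENMP=ON -DDATA_DIR=/opt/measures
make -j"$(nproc)"
sudo make install
```

Running `make -j"$(nproc)"` simply parallelizes the build across available cores; the sequential `make` from the README works identically, just more slowly.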




    Apache CarbonData is an indexed columnar data store for fast analytics on big-data platforms, e.g. Apache Hadoop, Apache Spark, etc. You can find the latest CarbonData documentation and learn more at: http://carbondata.apache.org
    Features: the CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as being splittable, compression schemes, complex data types, etc., and CarbonData has the following unique features:
    • Stores data along with an index: this can significantly accelerate query performance and reduce I/O scans and CPU usage when the query contains filters. The CarbonData index consists of multiple levels of indices; a processing framework can leverage it to reduce the number of tasks it needs to schedule and process, and it can also skip-scan at a finer-grained unit (called a blocklet) during task-side scanning instead of scanning the whole file.
    • Operable encoded data: by supporting efficient compression and global encoding schemes, queries can run directly on compressed/encoded data; the data is converted only just before the results are returned to the users, i.e. "late materialization".
    • Supports various use cases with one single data format: e.g. interactive OLAP-style queries, sequential access (big scans), and random access (narrow scans).




    Ruby Study





    https://blog.sina.cn/dpool/blog/s/blog_55e4ab170100qlhn.html https://qaz52e.blog.sohu.cn/326874666.html



    +86-010-68208678 liyang@opengcc.org 27 Wanshou Road, Haidian District, Beijing

    Copyright: Green Computing Consortium    京ICP备06019433号-12    京ICP备06019433号-13    京ICP备06019433号-15    Powered by Trustie